00:00:00.002 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3901 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3496 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.138 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.139 The recommended git tool is: git 00:00:00.140 using credential 00000000-0000-0000-0000-000000000002 00:00:00.141 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.207 Fetching changes from the remote Git repository 00:00:00.210 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.263 Using shallow fetch with depth 1 00:00:00.263 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.263 > git --version # timeout=10 00:00:00.304 > git --version # 'git version 2.39.2' 00:00:00.304 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.333 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.333 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.536 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.547 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.559 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:07.559 > git config core.sparsecheckout # timeout=10 00:00:07.569 > git read-tree -mu HEAD # timeout=10 00:00:07.587 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:07.612 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:07.613 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:07.768 [Pipeline] Start of Pipeline 00:00:07.784 [Pipeline] library 00:00:07.785 Loading library shm_lib@master 00:00:07.785 Library shm_lib@master is cached. Copying from home. 00:00:07.803 [Pipeline] node 00:00:07.819 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:07.821 [Pipeline] { 00:00:07.831 [Pipeline] catchError 00:00:07.833 [Pipeline] { 00:00:07.847 [Pipeline] wrap 00:00:07.855 [Pipeline] { 00:00:07.862 [Pipeline] stage 00:00:07.863 [Pipeline] { (Prologue) 00:00:07.878 [Pipeline] echo 00:00:07.879 Node: VM-host-SM38 00:00:07.884 [Pipeline] cleanWs 00:00:07.894 [WS-CLEANUP] Deleting project workspace... 00:00:07.894 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.901 [WS-CLEANUP] done 00:00:08.211 [Pipeline] setCustomBuildProperty 00:00:08.302 [Pipeline] httpRequest 00:00:08.907 [Pipeline] echo 00:00:08.908 Sorcerer 10.211.164.101 is alive 00:00:08.916 [Pipeline] retry 00:00:08.918 [Pipeline] { 00:00:08.930 [Pipeline] httpRequest 00:00:08.934 HttpMethod: GET 00:00:08.935 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.935 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.937 Response Code: HTTP/1.1 200 OK 00:00:08.937 Success: Status code 200 is in the accepted range: 200,404 00:00:08.938 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:10.053 [Pipeline] } 00:00:10.064 [Pipeline] // retry 00:00:10.069 [Pipeline] sh 00:00:10.351 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:10.366 [Pipeline] httpRequest 00:00:11.718 [Pipeline] echo 00:00:11.720 Sorcerer 10.211.164.101 is alive 00:00:11.728 [Pipeline] retry 00:00:11.730 [Pipeline] { 00:00:11.744 [Pipeline] httpRequest 00:00:11.748 HttpMethod: GET 00:00:11.749 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:11.750 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:11.774 Response Code: HTTP/1.1 200 OK 00:00:11.775 Success: Status code 200 is in the accepted range: 200,404 00:00:11.776 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:54.042 [Pipeline] } 00:01:54.060 [Pipeline] // retry 00:01:54.069 [Pipeline] sh 00:01:54.353 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:57.664 [Pipeline] sh 00:01:57.950 + git -C spdk log --oneline -n5 00:01:57.950 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:57.950 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:57.950 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:57.950 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:57.950 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:57.970 [Pipeline] withCredentials 00:01:57.982 > git --version # timeout=10 00:01:57.997 > git --version # 'git version 2.39.2' 00:01:58.017 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:58.019 [Pipeline] { 00:01:58.028 [Pipeline] retry 00:01:58.029 [Pipeline] { 00:01:58.044 [Pipeline] sh 00:01:58.331 + git ls-remote http://dpdk.org/git/dpdk main 00:01:58.346 [Pipeline] } 00:01:58.363 [Pipeline] // retry 00:01:58.369 [Pipeline] } 00:01:58.385 [Pipeline] // withCredentials 00:01:58.395 [Pipeline] httpRequest 00:01:58.811 [Pipeline] echo 00:01:58.813 Sorcerer 10.211.164.101 is alive 00:01:58.822 [Pipeline] retry 00:01:58.824 [Pipeline] { 00:01:58.838 [Pipeline] httpRequest 00:01:58.844 HttpMethod: GET 00:01:58.844 URL: http://10.211.164.101/packages/dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz 00:01:58.845 Sending request to url: http://10.211.164.101/packages/dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz 00:01:58.862 Response Code: HTTP/1.1 200 OK 00:01:58.863 Success: Status code 200 is in the accepted range: 200,404 00:01:58.863 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz 00:02:04.585 [Pipeline] } 00:02:04.600 [Pipeline] // retry 
00:02:04.606 [Pipeline] sh 00:02:04.887 + tar --no-same-owner -xf dpdk_bf0ff8df59c7e32f95c0b542cc4a7918f8a3da84.tar.gz 00:02:06.286 [Pipeline] sh 00:02:06.570 + git -C dpdk log --oneline -n5 00:02:06.570 bf0ff8df59 maintainers: fix prog guide paths 00:02:06.570 41dd9a6bc2 doc: reorganize prog guide 00:02:06.570 cb9187bc5c version: 24.11-rc0 00:02:06.570 b3485f4293 version: 24.07.0 00:02:06.570 fa58aec335 doc: add tested platforms with NVIDIA NICs 00:02:06.588 [Pipeline] writeFile 00:02:06.604 [Pipeline] sh 00:02:06.888 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:06.901 [Pipeline] sh 00:02:07.184 + cat autorun-spdk.conf 00:02:07.184 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.184 SPDK_TEST_NVME=1 00:02:07.184 SPDK_TEST_FTL=1 00:02:07.184 SPDK_TEST_ISAL=1 00:02:07.184 SPDK_RUN_ASAN=1 00:02:07.184 SPDK_RUN_UBSAN=1 00:02:07.184 SPDK_TEST_XNVME=1 00:02:07.184 SPDK_TEST_NVME_FDP=1 00:02:07.184 SPDK_TEST_NATIVE_DPDK=main 00:02:07.184 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.184 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.193 RUN_NIGHTLY=1 00:02:07.195 [Pipeline] } 00:02:07.209 [Pipeline] // stage 00:02:07.225 [Pipeline] stage 00:02:07.227 [Pipeline] { (Run VM) 00:02:07.239 [Pipeline] sh 00:02:07.563 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:07.563 + echo 'Start stage prepare_nvme.sh' 00:02:07.563 Start stage prepare_nvme.sh 00:02:07.563 + [[ -n 1 ]] 00:02:07.563 + disk_prefix=ex1 00:02:07.563 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:02:07.563 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:02:07.563 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:02:07.563 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:07.563 ++ SPDK_TEST_NVME=1 00:02:07.563 ++ SPDK_TEST_FTL=1 00:02:07.563 ++ SPDK_TEST_ISAL=1 00:02:07.563 ++ SPDK_RUN_ASAN=1 00:02:07.563 ++ SPDK_RUN_UBSAN=1 00:02:07.563 ++ SPDK_TEST_XNVME=1 00:02:07.563 ++ SPDK_TEST_NVME_FDP=1 00:02:07.563 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:07.563 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:07.563 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:07.563 ++ RUN_NIGHTLY=1 00:02:07.563 + cd /var/jenkins/workspace/nvme-vg-autotest 00:02:07.563 + nvme_files=() 00:02:07.563 + declare -A nvme_files 00:02:07.563 + backend_dir=/var/lib/libvirt/images/backends 00:02:07.563 + nvme_files['nvme.img']=5G 00:02:07.563 + nvme_files['nvme-cmb.img']=5G 00:02:07.563 + nvme_files['nvme-multi0.img']=4G 00:02:07.563 + nvme_files['nvme-multi1.img']=4G 00:02:07.563 + nvme_files['nvme-multi2.img']=4G 00:02:07.563 + nvme_files['nvme-openstack.img']=8G 00:02:07.563 + nvme_files['nvme-zns.img']=5G 00:02:07.563 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:07.563 + (( SPDK_TEST_FTL == 1 )) 00:02:07.563 + nvme_files["nvme-ftl.img"]=6G 00:02:07.563 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:07.563 + nvme_files["nvme-fdp.img"]=1G 00:02:07.563 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:07.563 + for nvme in "${!nvme_files[@]}" 00:02:07.563 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:02:07.563 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:07.563 + for nvme in "${!nvme_files[@]}" 00:02:07.563 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:02:07.563 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:02:07.563 + for nvme in "${!nvme_files[@]}" 00:02:07.563 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:02:07.829 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:07.829 + for nvme in "${!nvme_files[@]}" 00:02:07.830 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:02:07.830 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:07.830 + for nvme in "${!nvme_files[@]}" 00:02:07.830 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:02:07.830 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:07.830 + for nvme in "${!nvme_files[@]}" 00:02:07.830 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:02:07.830 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:07.830 + for nvme in "${!nvme_files[@]}" 00:02:07.830 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:02:07.830 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:07.830 + for nvme in "${!nvme_files[@]}" 00:02:07.830 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:02:07.830 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:02:07.830 + for nvme in "${!nvme_files[@]}" 00:02:07.830 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:02:08.773 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:08.773 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:02:08.773 + echo 'End stage prepare_nvme.sh' 00:02:08.773 End stage prepare_nvme.sh 00:02:08.788 [Pipeline] sh 00:02:09.077 + DISTRO=fedora39 00:02:09.077 + CPUS=10 00:02:09.077 + RAM=12288 00:02:09.077 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:09.077 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:02:09.077 00:02:09.077 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:02:09.077 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:02:09.077 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:02:09.077 HELP=0 00:02:09.077 DRY_RUN=0 00:02:09.077 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:02:09.077 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:02:09.077 NVME_AUTO_CREATE=0 00:02:09.077 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:02:09.077 NVME_CMB=,,,, 00:02:09.077 NVME_PMR=,,,, 00:02:09.077 NVME_ZNS=,,,, 00:02:09.077 NVME_MS=true,,,, 00:02:09.077 NVME_FDP=,,,on, 00:02:09.077 SPDK_VAGRANT_DISTRO=fedora39 00:02:09.077 SPDK_VAGRANT_VMCPU=10 00:02:09.077 SPDK_VAGRANT_VMRAM=12288 00:02:09.077 SPDK_VAGRANT_PROVIDER=libvirt 00:02:09.077 SPDK_VAGRANT_HTTP_PROXY= 00:02:09.077 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:09.077 SPDK_OPENSTACK_NETWORK=0 00:02:09.077 VAGRANT_PACKAGE_BOX=0 00:02:09.077 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:09.077 FORCE_DISTRO=true 00:02:09.077 VAGRANT_BOX_VERSION= 00:02:09.077 EXTRA_VAGRANTFILES= 00:02:09.077 NIC_MODEL=e1000 00:02:09.077 00:02:09.077 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:02:09.077 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:02:11.624 Bringing machine 'default' up with 'libvirt' provider... 00:02:11.884 ==> default: Creating image (snapshot of base box volume). 00:02:11.884 ==> default: Creating domain with the following settings... 
00:02:11.884 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727732636_78aa1ddf04e8e2d4d4ab 00:02:11.884 ==> default: -- Domain type: kvm 00:02:11.884 ==> default: -- Cpus: 10 00:02:11.884 ==> default: -- Feature: acpi 00:02:11.884 ==> default: -- Feature: apic 00:02:11.884 ==> default: -- Feature: pae 00:02:11.884 ==> default: -- Memory: 12288M 00:02:11.884 ==> default: -- Memory Backing: hugepages: 00:02:11.884 ==> default: -- Management MAC: 00:02:11.884 ==> default: -- Loader: 00:02:11.884 ==> default: -- Nvram: 00:02:11.884 ==> default: -- Base box: spdk/fedora39 00:02:11.884 ==> default: -- Storage pool: default 00:02:11.884 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727732636_78aa1ddf04e8e2d4d4ab.img (20G) 00:02:11.884 ==> default: -- Volume Cache: default 00:02:11.884 ==> default: -- Kernel: 00:02:11.884 ==> default: -- Initrd: 00:02:11.884 ==> default: -- Graphics Type: vnc 00:02:11.884 ==> default: -- Graphics Port: -1 00:02:11.884 ==> default: -- Graphics IP: 127.0.0.1 00:02:11.884 ==> default: -- Graphics Password: Not defined 00:02:11.884 ==> default: -- Video Type: cirrus 00:02:11.884 ==> default: -- Video VRAM: 9216 00:02:11.884 ==> default: -- Sound Type: 00:02:11.884 ==> default: -- Keymap: en-us 00:02:11.884 ==> default: -- TPM Path: 00:02:11.884 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:11.884 ==> default: -- Command line args: 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:11.884 ==> default: -> value=-drive, 00:02:11.884 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:11.884 ==> default: -> value=-drive, 00:02:11.884 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:02:11.884 ==> default: -> value=-drive, 00:02:11.884 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:11.884 ==> default: -> value=-drive, 00:02:11.884 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:11.884 ==> default: -> value=-drive, 00:02:11.884 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:02:11.884 ==> default: -> value=-drive, 00:02:11.884 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:02:11.884 ==> default: -> value=-device, 00:02:11.884 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:12.145 ==> default: Creating shared folders metadata... 00:02:12.145 ==> default: Starting domain. 00:02:14.061 ==> default: Waiting for domain to get an IP address... 00:02:32.195 ==> default: Waiting for SSH to become available... 00:02:32.195 ==> default: Configuring and enabling network interfaces... 00:02:35.475 default: SSH address: 192.168.121.67:22 00:02:35.475 default: SSH username: vagrant 00:02:35.475 default: SSH auth method: private key 00:02:37.373 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:45.482 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:49.660 ==> default: Mounting SSHFS shared folder... 00:02:50.680 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:50.680 ==> default: Checking Mount.. 00:02:51.613 ==> default: Folder Successfully Mounted! 00:02:51.613 00:02:51.613 SUCCESS! 00:02:51.613 00:02:51.613 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:51.613 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:51.613 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:51.613 00:02:51.620 [Pipeline] } 00:02:51.633 [Pipeline] // stage 00:02:51.640 [Pipeline] dir 00:02:51.640 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:51.642 [Pipeline] { 00:02:51.653 [Pipeline] catchError 00:02:51.655 [Pipeline] { 00:02:51.666 [Pipeline] sh 00:02:51.943 + vagrant ssh-config --host vagrant 00:02:51.943 + sed -ne '/^Host/,$p' 00:02:51.943 + tee ssh_conf 00:02:54.476 Host vagrant 00:02:54.476 HostName 192.168.121.67 00:02:54.476 User vagrant 00:02:54.476 Port 22 00:02:54.476 UserKnownHostsFile /dev/null 00:02:54.476 StrictHostKeyChecking no 00:02:54.476 PasswordAuthentication no 00:02:54.476 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:54.476 IdentitiesOnly yes 00:02:54.476 LogLevel FATAL 00:02:54.476 ForwardAgent yes 00:02:54.476 ForwardX11 yes 00:02:54.476 00:02:54.487 [Pipeline] withEnv 00:02:54.489 [Pipeline] { 00:02:54.497 [Pipeline] sh 00:02:54.777 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:54.777 source /etc/os-release 00:02:54.777 [[ -e /image.version ]] && img=$(< /image.version) 00:02:54.777 # Minimal, systemd-like check. 
00:02:54.777 if [[ -e /.dockerenv ]]; then 00:02:54.777 # Clear garbage from the node'\''s name: 00:02:54.777 # agt-er_autotest_547-896 -> autotest_547-896 00:02:54.777 # $HOSTNAME is the actual container id 00:02:54.777 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:54.777 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:54.777 # We can assume this is a mount from a host where container is running, 00:02:54.777 # so fetch its hostname to easily identify the target swarm worker. 00:02:54.777 container="$(< /etc/hostname) ($agent)" 00:02:54.777 else 00:02:54.777 # Fallback 00:02:54.777 container=$agent 00:02:54.777 fi 00:02:54.777 fi 00:02:54.777 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:54.777 ' 00:02:55.052 [Pipeline] } 00:02:55.071 [Pipeline] // withEnv 00:02:55.079 [Pipeline] setCustomBuildProperty 00:02:55.095 [Pipeline] stage 00:02:55.097 [Pipeline] { (Tests) 00:02:55.114 [Pipeline] sh 00:02:55.398 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:55.670 [Pipeline] sh 00:02:55.953 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:56.226 [Pipeline] timeout 00:02:56.226 Timeout set to expire in 50 min 00:02:56.227 [Pipeline] { 00:02:56.237 [Pipeline] sh 00:02:56.540 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:57.107 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:57.120 [Pipeline] sh 00:02:57.402 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:57.669 [Pipeline] sh 00:02:57.944 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:58.222 [Pipeline] sh 00:02:58.506 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:58.768 ++ readlink -f spdk_repo 00:02:58.768 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:58.768 + [[ -n /home/vagrant/spdk_repo ]] 00:02:58.768 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:58.768 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:58.768 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:58.768 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:58.768 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:58.768 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:58.768 + cd /home/vagrant/spdk_repo 00:02:58.768 + source /etc/os-release 00:02:58.768 ++ NAME='Fedora Linux' 00:02:58.768 ++ VERSION='39 (Cloud Edition)' 00:02:58.768 ++ ID=fedora 00:02:58.768 ++ VERSION_ID=39 00:02:58.768 ++ VERSION_CODENAME= 00:02:58.768 ++ PLATFORM_ID=platform:f39 00:02:58.768 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:58.768 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:58.768 ++ LOGO=fedora-logo-icon 00:02:58.768 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:58.768 ++ HOME_URL=https://fedoraproject.org/ 00:02:58.768 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:58.768 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:58.768 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:58.769 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:58.769 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:58.769 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:58.769 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:58.769 ++ SUPPORT_END=2024-11-12 00:02:58.769 ++ VARIANT='Cloud Edition' 00:02:58.769 ++ VARIANT_ID=cloud 00:02:58.769 + uname -a 00:02:58.769 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:58.769 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:59.030 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:59.292 Hugepages 00:02:59.292 node hugesize free / total 00:02:59.292 node0 1048576kB 0 / 0 00:02:59.292 node0 2048kB 0 / 0 00:02:59.292 00:02:59.292 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:59.292 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:59.292 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:59.292 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:59.553 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:59.553 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:59.553 + rm -f /tmp/spdk-ld-path 00:02:59.553 + source autorun-spdk.conf 00:02:59.553 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.553 ++ SPDK_TEST_NVME=1 00:02:59.553 ++ SPDK_TEST_FTL=1 00:02:59.553 ++ SPDK_TEST_ISAL=1 00:02:59.553 ++ SPDK_RUN_ASAN=1 00:02:59.553 ++ SPDK_RUN_UBSAN=1 00:02:59.553 ++ SPDK_TEST_XNVME=1 00:02:59.553 ++ SPDK_TEST_NVME_FDP=1 00:02:59.553 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:59.553 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:59.553 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.553 ++ RUN_NIGHTLY=1 00:02:59.553 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:59.553 + [[ -n '' ]] 00:02:59.553 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:59.553 + for M in /var/spdk/build-*-manifest.txt 00:02:59.553 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:59.553 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.553 + for M in /var/spdk/build-*-manifest.txt 00:02:59.553 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:59.553 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.553 + for M in /var/spdk/build-*-manifest.txt 00:02:59.553 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:59.553 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:59.553 ++ uname 00:02:59.553 + [[ Linux == 
\L\i\n\u\x ]] 00:02:59.553 + sudo dmesg -T 00:02:59.553 + sudo dmesg --clear 00:02:59.553 + dmesg_pid=5753 00:02:59.553 + [[ Fedora Linux == FreeBSD ]] 00:02:59.553 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.553 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:59.553 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:59.553 + sudo dmesg -Tw 00:02:59.553 + [[ -x /usr/src/fio-static/fio ]] 00:02:59.553 + export FIO_BIN=/usr/src/fio-static/fio 00:02:59.553 + FIO_BIN=/usr/src/fio-static/fio 00:02:59.553 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:59.553 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:59.553 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:59.553 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.553 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:59.553 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:59.553 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.553 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:59.553 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:59.553 Test configuration: 00:02:59.553 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:59.553 SPDK_TEST_NVME=1 00:02:59.553 SPDK_TEST_FTL=1 00:02:59.553 SPDK_TEST_ISAL=1 00:02:59.553 SPDK_RUN_ASAN=1 00:02:59.553 SPDK_RUN_UBSAN=1 00:02:59.553 SPDK_TEST_XNVME=1 00:02:59.553 SPDK_TEST_NVME_FDP=1 00:02:59.553 SPDK_TEST_NATIVE_DPDK=main 00:02:59.553 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:59.553 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:59.815 RUN_NIGHTLY=1 21:44:44 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:59.815 21:44:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:59.815 21:44:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:59.815 21:44:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:59.815 21:44:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.815 21:44:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.815 21:44:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.815 21:44:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.815 21:44:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.815 21:44:44 -- paths/export.sh@5 -- $ export PATH 00:02:59.815 21:44:44 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:59.815 21:44:44 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:59.815 21:44:44 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:59.815 21:44:44 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727732684.XXXXXX 00:02:59.815 21:44:44 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727732684.3nYtZI 00:02:59.815 21:44:44 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:59.815 21:44:44 -- common/autobuild_common.sh@485 -- $ '[' -n main ']' 00:02:59.815 21:44:44 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:59.815 21:44:44 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:59.815 21:44:44 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:59.815 21:44:44 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:59.815 21:44:44 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:59.815 21:44:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:59.815 21:44:44 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.815 21:44:44 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:59.815 21:44:44 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:59.815 21:44:44 -- pm/common@17 -- $ local monitor 00:02:59.815 21:44:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.815 21:44:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:59.815 21:44:44 -- pm/common@25 -- $ sleep 1 00:02:59.815 21:44:44 -- pm/common@21 -- $ date +%s 00:02:59.815 21:44:44 -- pm/common@21 -- $ date +%s 00:02:59.815 21:44:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727732684 00:02:59.815 21:44:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727732684 00:02:59.815 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727732684_collect-vmstat.pm.log 00:02:59.815 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727732684_collect-cpu-load.pm.log 00:03:00.756 21:44:45 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:03:00.756 21:44:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:00.756 21:44:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:00.756 21:44:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:00.756 21:44:45 -- spdk/autobuild.sh@16 -- $ date -u 00:03:00.756 Mon 
Sep 30 09:44:45 PM UTC 2024 00:03:00.756 21:44:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:03:00.756 v25.01-pre-17-g09cc66129 00:03:00.756 21:44:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:03:00.756 21:44:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:03:00.756 21:44:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:00.756 21:44:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.756 21:44:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.756 ************************************ 00:03:00.756 START TEST asan 00:03:00.756 ************************************ 00:03:00.756 using asan 00:03:00.756 21:44:45 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan' 00:03:00.756 00:03:00.756 real 0m0.000s 00:03:00.756 user 0m0.000s 00:03:00.756 sys 0m0.000s 00:03:00.756 21:44:45 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:00.756 ************************************ 00:03:00.756 21:44:45 asan -- common/autotest_common.sh@10 -- $ set +x 00:03:00.756 END TEST asan 00:03:00.756 ************************************ 00:03:00.756 21:44:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:00.756 21:44:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:00.756 21:44:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:03:00.756 21:44:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.756 21:44:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.756 ************************************ 00:03:00.756 START TEST ubsan 00:03:00.756 ************************************ 00:03:00.756 using ubsan 00:03:00.756 21:44:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:03:00.756 00:03:00.756 real 0m0.000s 00:03:00.756 user 0m0.000s 00:03:00.756 sys 0m0.000s 00:03:00.756 21:44:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:00.756 21:44:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:00.756 ************************************ 00:03:00.756 END TEST ubsan 00:03:00.756 ************************************ 00:03:00.756 21:44:45 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:03:00.756 21:44:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:03:00.756 21:44:45 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:03:00.756 21:44:45 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:03:00.756 21:44:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:03:00.756 21:44:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:00.756 ************************************ 00:03:00.756 START TEST build_native_dpdk 00:03:00.756 ************************************ 00:03:00.756 21:44:45 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:03:00.756 21:44:45 build_native_dpdk -- 
common/autobuild_common.sh@61 -- $ export CC=gcc 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:03:00.756 bf0ff8df59 maintainers: fix prog guide paths 00:03:00.756 41dd9a6bc2 doc: reorganize prog guide 00:03:00.756 cb9187bc5c version: 24.11-rc0 00:03:00.756 b3485f4293 version: 24.07.0 00:03:00.756 fa58aec335 doc: add tested platforms with NVIDIA NICs 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc0 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc0 21.11.0 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@373 -- $ 
cmp_versions 24.11.0-rc0 '<' 21.11.0 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.756 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:00.756 21:44:45 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:03:00.756 patching file config/rte_config.h 00:03:00.756 Hunk #1 succeeded at 70 (offset 11 lines). 
00:03:00.757 21:44:45 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc0 24.07.0 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc0 '<' 24.07.0 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:03:00.757 21:44:45 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc0 24.07.0 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc0 '>=' 24.07.0 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:03:00.757 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:03:01.017 21:44:45 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:03:01.017 21:44:45 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:03:01.017 21:44:45 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:03:01.017 21:44:45 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:03:01.017 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:03:01.017 21:44:45 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:03:01.017 21:44:45 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:03:01.017 patching file drivers/bus/pci/linux/pci_uio.c 00:03:01.017 21:44:45 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:03:01.017 21:44:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:03:01.017 21:44:45 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:03:01.017 21:44:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:03:01.017 21:44:45 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:05.211 The Meson build system 00:03:05.211 Version: 1.5.0 00:03:05.211 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:05.211 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:05.211 Build type: native build 00:03:05.211 Program cat found: YES (/usr/bin/cat) 00:03:05.211 Project name: DPDK 00:03:05.211 Project version: 24.11.0-rc0 00:03:05.211 C compiler for the 
host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:05.211 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:05.211 Host machine cpu family: x86_64 00:03:05.211 Host machine cpu: x86_64 00:03:05.211 Message: ## Building in Developer Mode ## 00:03:05.211 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:05.211 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:05.211 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:05.211 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:03:05.211 Program cat found: YES (/usr/bin/cat) 00:03:05.211 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 00:03:05.211 Compiler for C supports arguments -march=native: YES 00:03:05.211 Checking for size of "void *" : 8 00:03:05.211 Checking for size of "void *" : 8 (cached) 00:03:05.211 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:05.211 Library m found: YES 00:03:05.211 Library numa found: YES 00:03:05.211 Has header "numaif.h" : YES 00:03:05.211 Library fdt found: NO 00:03:05.211 Library execinfo found: NO 00:03:05.211 Has header "execinfo.h" : YES 00:03:05.211 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:05.211 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:05.212 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:05.212 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:05.212 Run-time dependency openssl found: YES 3.1.1 00:03:05.212 Run-time dependency libpcap found: YES 1.10.4 00:03:05.212 Has header "pcap.h" with dependency libpcap: YES 00:03:05.212 Compiler for C supports arguments -Wcast-qual: YES 00:03:05.212 Compiler for C supports arguments -Wdeprecated: YES 00:03:05.212 Compiler for C supports arguments -Wformat: YES 00:03:05.212 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:05.212 Compiler for C supports arguments -Wformat-security: NO 00:03:05.212 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:05.212 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:05.212 Compiler for C supports arguments -Wnested-externs: YES 00:03:05.212 Compiler for C supports arguments -Wold-style-definition: YES 00:03:05.212 Compiler for C supports arguments -Wpointer-arith: YES 00:03:05.212 Compiler for C supports arguments -Wsign-compare: YES 00:03:05.212 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:05.212 Compiler for C supports arguments -Wundef: YES 00:03:05.212 Compiler for C supports arguments -Wwrite-strings: YES 00:03:05.212 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:05.212 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:05.212 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:05.212 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:05.212 Program objdump found: YES (/usr/bin/objdump) 00:03:05.212 Compiler for C supports arguments -mavx512f: YES 00:03:05.212 Checking if "AVX512 checking" compiles: YES 00:03:05.212 Fetching value of define "__SSE4_2__" : 1 00:03:05.212 Fetching value of define "__AES__" : 1 00:03:05.212 Fetching value of define "__AVX__" : 1 00:03:05.212 Fetching value of define "__AVX2__" : 1 00:03:05.212 Fetching value of define "__AVX512BW__" : 1 00:03:05.212 Fetching value of define "__AVX512CD__" : 1 
00:03:05.212 Fetching value of define "__AVX512DQ__" : 1 00:03:05.212 Fetching value of define "__AVX512F__" : 1 00:03:05.212 Fetching value of define "__AVX512VL__" : 1 00:03:05.212 Fetching value of define "__PCLMUL__" : 1 00:03:05.212 Fetching value of define "__RDRND__" : 1 00:03:05.212 Fetching value of define "__RDSEED__" : 1 00:03:05.212 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:05.212 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:05.212 Message: lib/log: Defining dependency "log" 00:03:05.212 Message: lib/kvargs: Defining dependency "kvargs" 00:03:05.212 Message: lib/argparse: Defining dependency "argparse" 00:03:05.212 Message: lib/telemetry: Defining dependency "telemetry" 00:03:05.212 Checking for function "getentropy" : NO 00:03:05.212 Message: lib/eal: Defining dependency "eal" 00:03:05.212 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:03:05.212 Message: lib/ring: Defining dependency "ring" 00:03:05.212 Message: lib/rcu: Defining dependency "rcu" 00:03:05.212 Message: lib/mempool: Defining dependency "mempool" 00:03:05.212 Message: lib/mbuf: Defining dependency "mbuf" 00:03:05.212 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:05.212 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:05.212 Compiler for C supports arguments -mpclmul: YES 00:03:05.212 Compiler for C supports arguments -maes: YES 00:03:05.212 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:05.212 Compiler for C supports arguments -mavx512bw: YES 00:03:05.212 Compiler for C supports arguments -mavx512dq: YES 00:03:05.212 Compiler for C supports arguments -mavx512vl: YES 00:03:05.212 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:05.212 Compiler for C supports arguments -mavx2: YES 00:03:05.212 Compiler for C supports arguments -mavx: YES 00:03:05.212 Message: lib/net: Defining dependency "net" 00:03:05.212 Message: lib/meter: Defining dependency "meter" 00:03:05.212 Message: lib/ethdev: Defining dependency "ethdev" 00:03:05.212 Message: lib/pci: Defining dependency "pci" 00:03:05.212 Message: lib/cmdline: Defining dependency "cmdline" 00:03:05.212 Message: lib/metrics: Defining dependency "metrics" 00:03:05.212 Message: lib/hash: Defining dependency "hash" 00:03:05.212 Message: lib/timer: Defining dependency "timer" 00:03:05.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:05.212 Message: lib/acl: Defining dependency "acl" 00:03:05.212 Message: lib/bbdev: Defining dependency "bbdev" 00:03:05.212 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:05.212 Run-time dependency libelf found: YES 0.191 00:03:05.212 Message: lib/bpf: Defining dependency "bpf" 00:03:05.212 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:05.212 Message: lib/compressdev: Defining dependency "compressdev" 00:03:05.212 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:05.212 Message: lib/distributor: Defining dependency "distributor" 00:03:05.212 Message: lib/dmadev: Defining dependency "dmadev" 00:03:05.212 Message: lib/efd: Defining 
dependency "efd" 00:03:05.212 Message: lib/eventdev: Defining dependency "eventdev" 00:03:05.212 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:05.212 Message: lib/gpudev: Defining dependency "gpudev" 00:03:05.212 Message: lib/gro: Defining dependency "gro" 00:03:05.212 Message: lib/gso: Defining dependency "gso" 00:03:05.212 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:05.212 Message: lib/jobstats: Defining dependency "jobstats" 00:03:05.212 Message: lib/latencystats: Defining dependency "latencystats" 00:03:05.212 Message: lib/lpm: Defining dependency "lpm" 00:03:05.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512IFMA__" : 1 00:03:05.212 Message: lib/member: Defining dependency "member" 00:03:05.212 Message: lib/pcapng: Defining dependency "pcapng" 00:03:05.212 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:05.212 Message: lib/power: Defining dependency "power" 00:03:05.212 Message: lib/rawdev: Defining dependency "rawdev" 00:03:05.212 Message: lib/regexdev: Defining dependency "regexdev" 00:03:05.212 Message: lib/mldev: Defining dependency "mldev" 00:03:05.212 Message: lib/rib: Defining dependency "rib" 00:03:05.212 Message: lib/reorder: Defining dependency "reorder" 00:03:05.212 Message: lib/sched: Defining dependency "sched" 00:03:05.212 Message: lib/security: Defining dependency "security" 00:03:05.212 Message: lib/stack: Defining dependency "stack" 00:03:05.212 Has header "linux/userfaultfd.h" : YES 00:03:05.212 Has header "linux/vduse.h" : YES 00:03:05.212 Message: lib/vhost: Defining dependency "vhost" 00:03:05.212 Message: lib/ipsec: Defining dependency "ipsec" 00:03:05.212 Message: lib/pdcp: Defining dependency "pdcp" 00:03:05.212 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:05.212 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:05.212 Message: lib/fib: Defining dependency "fib" 00:03:05.212 Message: lib/port: Defining dependency "port" 00:03:05.212 Message: lib/pdump: Defining dependency "pdump" 00:03:05.212 Message: lib/table: Defining dependency "table" 00:03:05.212 Message: lib/pipeline: Defining dependency "pipeline" 00:03:05.212 Message: lib/graph: Defining dependency "graph" 00:03:05.212 Message: lib/node: Defining dependency "node" 00:03:05.212 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:05.212 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:05.212 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:05.212 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:06.600 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:06.600 Compiler for C supports arguments -Wno-unused-value: YES 00:03:06.600 Compiler for C supports arguments -Wno-format: YES 00:03:06.600 Compiler for C supports arguments -Wno-format-security: YES 00:03:06.600 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:06.600 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:06.600 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:06.600 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:06.600 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:06.600 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:06.600 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:06.600 Compiler for 
C supports arguments -mavx512bw: YES (cached) 00:03:06.600 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:06.600 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:06.600 Has header "sys/epoll.h" : YES 00:03:06.600 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:06.600 Configuring doxy-api-html.conf using configuration 00:03:06.600 Configuring doxy-api-man.conf using configuration 00:03:06.600 Program mandb found: YES (/usr/bin/mandb) 00:03:06.600 Program sphinx-build found: NO 00:03:06.600 Configuring rte_build_config.h using configuration 00:03:06.600 Message: 00:03:06.600 ================= 00:03:06.600 Applications Enabled 00:03:06.600 ================= 00:03:06.600 00:03:06.600 apps: 00:03:06.600 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:03:06.600 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:06.600 test-pmd, test-regex, test-sad, test-security-perf, 00:03:06.600 00:03:06.600 Message: 00:03:06.600 ================= 00:03:06.600 Libraries Enabled 00:03:06.600 ================= 00:03:06.600 00:03:06.600 libs: 00:03:06.600 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 00:03:06.600 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 00:03:06.600 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 00:03:06.600 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 00:03:06.600 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 00:03:06.600 rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 00:03:06.600 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 00:03:06.600 graph, node, 00:03:06.600 00:03:06.600 Message: 00:03:06.600 =============== 00:03:06.600 Drivers Enabled 00:03:06.600 =============== 00:03:06.600 00:03:06.600 common: 00:03:06.600 00:03:06.600 bus: 00:03:06.600 pci, vdev, 00:03:06.600 mempool: 00:03:06.600 ring, 00:03:06.600 dma: 00:03:06.600 00:03:06.600 net: 00:03:06.600 i40e, 00:03:06.600 raw: 00:03:06.600 00:03:06.600 crypto: 00:03:06.600 00:03:06.600 compress: 00:03:06.600 00:03:06.600 regex: 00:03:06.600 00:03:06.600 ml: 00:03:06.600 00:03:06.600 vdpa: 00:03:06.600 00:03:06.600 event: 00:03:06.600 00:03:06.600 baseband: 00:03:06.600 00:03:06.600 gpu: 00:03:06.600 00:03:06.600 00:03:06.600 Message: 00:03:06.601 ================= 00:03:06.601 Content Skipped 00:03:06.601 ================= 00:03:06.601 00:03:06.601 apps: 00:03:06.601 00:03:06.601 libs: 00:03:06.601 00:03:06.601 drivers: 00:03:06.601 common/cpt: not in enabled drivers build config 00:03:06.601 common/dpaax: not in enabled drivers build config 00:03:06.601 common/iavf: not in enabled drivers build config 00:03:06.601 common/idpf: not in enabled drivers build config 00:03:06.601 common/ionic: not in enabled drivers build config 00:03:06.601 common/mvep: not in enabled drivers build config 00:03:06.601 common/octeontx: not in enabled drivers build config 00:03:06.601 bus/auxiliary: not in enabled drivers build config 00:03:06.601 bus/cdx: not in enabled drivers build config 00:03:06.601 bus/dpaa: not in enabled drivers build config 00:03:06.601 bus/fslmc: not in enabled drivers build config 00:03:06.601 bus/ifpga: not in enabled drivers build config 00:03:06.601 bus/platform: not in enabled drivers build config 00:03:06.601 bus/uacce: not in enabled drivers build config 00:03:06.601 bus/vmbus: not in enabled drivers build config 00:03:06.601 common/cnxk: not in 
enabled drivers build config 00:03:06.601 common/mlx5: not in enabled drivers build config 00:03:06.601 common/nfp: not in enabled drivers build config 00:03:06.601 common/nitrox: not in enabled drivers build config 00:03:06.601 common/qat: not in enabled drivers build config 00:03:06.601 common/sfc_efx: not in enabled drivers build config 00:03:06.601 mempool/bucket: not in enabled drivers build config 00:03:06.601 mempool/cnxk: not in enabled drivers build config 00:03:06.601 mempool/dpaa: not in enabled drivers build config 00:03:06.601 mempool/dpaa2: not in enabled drivers build config 00:03:06.601 mempool/octeontx: not in enabled drivers build config 00:03:06.601 mempool/stack: not in enabled drivers build config 00:03:06.601 dma/cnxk: not in enabled drivers build config 00:03:06.601 dma/dpaa: not in enabled drivers build config 00:03:06.601 dma/dpaa2: not in enabled drivers build config 00:03:06.601 dma/hisilicon: not in enabled drivers build config 00:03:06.601 dma/idxd: not in enabled drivers build config 00:03:06.601 dma/ioat: not in enabled drivers build config 00:03:06.601 dma/odm: not in enabled drivers build config 00:03:06.601 dma/skeleton: not in enabled drivers build config 00:03:06.601 net/af_packet: not in enabled drivers build config 00:03:06.601 net/af_xdp: not in enabled drivers build config 00:03:06.601 net/ark: not in enabled drivers build config 00:03:06.601 net/atlantic: not in enabled drivers build config 00:03:06.601 net/avp: not in enabled drivers build config 00:03:06.601 net/axgbe: not in enabled drivers build config 00:03:06.601 net/bnx2x: not in enabled drivers build config 00:03:06.601 net/bnxt: not in enabled drivers build config 00:03:06.601 net/bonding: not in enabled drivers build config 00:03:06.601 net/cnxk: not in enabled drivers build config 00:03:06.601 net/cpfl: not in enabled drivers build config 00:03:06.601 net/cxgbe: not in enabled drivers build config 00:03:06.601 net/dpaa: not in enabled drivers build config 00:03:06.601 net/dpaa2: not in enabled drivers build config 00:03:06.601 net/e1000: not in enabled drivers build config 00:03:06.601 net/ena: not in enabled drivers build config 00:03:06.601 net/enetc: not in enabled drivers build config 00:03:06.601 net/enetfec: not in enabled drivers build config 00:03:06.601 net/enic: not in enabled drivers build config 00:03:06.601 net/failsafe: not in enabled drivers build config 00:03:06.601 net/fm10k: not in enabled drivers build config 00:03:06.601 net/gve: not in enabled drivers build config 00:03:06.601 net/hinic: not in enabled drivers build config 00:03:06.601 net/hns3: not in enabled drivers build config 00:03:06.601 net/iavf: not in enabled drivers build config 00:03:06.601 net/ice: not in enabled drivers build config 00:03:06.601 net/idpf: not in enabled drivers build config 00:03:06.601 net/igc: not in enabled drivers build config 00:03:06.601 net/ionic: not in enabled drivers build config 00:03:06.601 net/ipn3ke: not in enabled drivers build config 00:03:06.601 net/ixgbe: not in enabled drivers build config 00:03:06.601 net/mana: not in enabled drivers build config 00:03:06.601 net/memif: not in enabled drivers build config 00:03:06.601 net/mlx4: not in enabled drivers build config 00:03:06.601 net/mlx5: not in enabled drivers build config 00:03:06.601 net/mvneta: not in enabled drivers build config 00:03:06.601 net/mvpp2: not in enabled drivers build config 00:03:06.601 net/netvsc: not in enabled drivers build config 00:03:06.601 net/nfb: not in enabled drivers build config 00:03:06.601 
net/nfp: not in enabled drivers build config 00:03:06.601 net/ngbe: not in enabled drivers build config 00:03:06.601 net/ntnic: not in enabled drivers build config 00:03:06.601 net/null: not in enabled drivers build config 00:03:06.601 net/octeontx: not in enabled drivers build config 00:03:06.601 net/octeon_ep: not in enabled drivers build config 00:03:06.601 net/pcap: not in enabled drivers build config 00:03:06.601 net/pfe: not in enabled drivers build config 00:03:06.601 net/qede: not in enabled drivers build config 00:03:06.601 net/ring: not in enabled drivers build config 00:03:06.601 net/sfc: not in enabled drivers build config 00:03:06.601 net/softnic: not in enabled drivers build config 00:03:06.601 net/tap: not in enabled drivers build config 00:03:06.601 net/thunderx: not in enabled drivers build config 00:03:06.601 net/txgbe: not in enabled drivers build config 00:03:06.601 net/vdev_netvsc: not in enabled drivers build config 00:03:06.601 net/vhost: not in enabled drivers build config 00:03:06.601 net/virtio: not in enabled drivers build config 00:03:06.601 net/vmxnet3: not in enabled drivers build config 00:03:06.601 raw/cnxk_bphy: not in enabled drivers build config 00:03:06.601 raw/cnxk_gpio: not in enabled drivers build config 00:03:06.601 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:06.601 raw/ifpga: not in enabled drivers build config 00:03:06.601 raw/ntb: not in enabled drivers build config 00:03:06.601 raw/skeleton: not in enabled drivers build config 00:03:06.601 crypto/armv8: not in enabled drivers build config 00:03:06.601 crypto/bcmfs: not in enabled drivers build config 00:03:06.601 crypto/caam_jr: not in enabled drivers build config 00:03:06.601 crypto/ccp: not in enabled drivers build config 00:03:06.601 crypto/cnxk: not in enabled drivers build config 00:03:06.601 crypto/dpaa_sec: not in enabled drivers build config 00:03:06.601 crypto/dpaa2_sec: not in enabled drivers build config 00:03:06.601 crypto/ionic: not in enabled drivers build config 00:03:06.601 crypto/ipsec_mb: not in enabled drivers build config 00:03:06.601 crypto/mlx5: not in enabled drivers build config 00:03:06.601 crypto/mvsam: not in enabled drivers build config 00:03:06.601 crypto/nitrox: not in enabled drivers build config 00:03:06.601 crypto/null: not in enabled drivers build config 00:03:06.601 crypto/octeontx: not in enabled drivers build config 00:03:06.601 crypto/openssl: not in enabled drivers build config 00:03:06.601 crypto/scheduler: not in enabled drivers build config 00:03:06.601 crypto/uadk: not in enabled drivers build config 00:03:06.601 crypto/virtio: not in enabled drivers build config 00:03:06.601 compress/isal: not in enabled drivers build config 00:03:06.601 compress/mlx5: not in enabled drivers build config 00:03:06.601 compress/nitrox: not in enabled drivers build config 00:03:06.601 compress/octeontx: not in enabled drivers build config 00:03:06.601 compress/uadk: not in enabled drivers build config 00:03:06.601 compress/zlib: not in enabled drivers build config 00:03:06.601 regex/mlx5: not in enabled drivers build config 00:03:06.601 regex/cn9k: not in enabled drivers build config 00:03:06.601 ml/cnxk: not in enabled drivers build config 00:03:06.601 vdpa/ifc: not in enabled drivers build config 00:03:06.601 vdpa/mlx5: not in enabled drivers build config 00:03:06.601 vdpa/nfp: not in enabled drivers build config 00:03:06.601 vdpa/sfc: not in enabled drivers build config 00:03:06.601 event/cnxk: not in enabled drivers build config 00:03:06.601 event/dlb2: 
not in enabled drivers build config 00:03:06.601 event/dpaa: not in enabled drivers build config 00:03:06.601 event/dpaa2: not in enabled drivers build config 00:03:06.601 event/dsw: not in enabled drivers build config 00:03:06.601 event/opdl: not in enabled drivers build config 00:03:06.601 event/skeleton: not in enabled drivers build config 00:03:06.601 event/sw: not in enabled drivers build config 00:03:06.601 event/octeontx: not in enabled drivers build config 00:03:06.601 baseband/acc: not in enabled drivers build config 00:03:06.601 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:06.601 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:06.601 baseband/la12xx: not in enabled drivers build config 00:03:06.601 baseband/null: not in enabled drivers build config 00:03:06.601 baseband/turbo_sw: not in enabled drivers build config 00:03:06.601 gpu/cuda: not in enabled drivers build config 00:03:06.601 00:03:06.601 00:03:06.601 Build targets in project: 219 00:03:06.601 00:03:06.601 DPDK 24.11.0-rc0 00:03:06.601 00:03:06.601 User defined options 00:03:06.601 libdir : lib 00:03:06.601 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:06.601 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:06.601 c_link_args : 00:03:06.601 enable_docs : false 00:03:06.601 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:06.601 enable_kmods : false 00:03:06.601 machine : native 00:03:06.601 tests : false 00:03:06.601 00:03:06.601 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:06.601 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:06.601 21:44:51 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:06.601 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:06.601 [1/718] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:06.601 [2/718] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:06.601 [3/718] Linking static target lib/librte_kvargs.a 00:03:06.601 [4/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:06.601 [5/718] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:06.601 [6/718] Linking static target lib/librte_log.a 00:03:06.601 [7/718] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:03:06.601 [8/718] Linking static target lib/librte_argparse.a 00:03:06.861 [9/718] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.861 [10/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:06.861 [11/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:06.861 [12/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:06.861 [13/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:06.861 [14/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:06.861 [15/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:06.861 [16/718] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.861 [17/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:07.122 [18/718] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.122 [19/718] Linking target 
lib/librte_log.so.25.0 00:03:07.122 [20/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:07.122 [21/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:07.122 [22/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:07.122 [23/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:07.122 [24/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:07.122 [25/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:07.122 [26/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:07.122 [27/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:07.381 [28/718] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:03:07.381 [29/718] Linking target lib/librte_kvargs.so.25.0 00:03:07.381 [30/718] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:07.381 [31/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:07.381 [32/718] Linking static target lib/librte_telemetry.a 00:03:07.381 [33/718] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:03:07.381 [34/718] Linking target lib/librte_argparse.so.25.0 00:03:07.381 [35/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.381 [36/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:07.381 [37/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:07.381 [38/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.640 [39/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:07.641 [40/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:07.641 [41/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.641 [42/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.641 [43/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.641 [44/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.641 [45/718] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.641 [46/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:07.641 [47/718] Linking target lib/librte_telemetry.so.25.0 00:03:07.900 [48/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:07.900 [49/718] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:03:07.900 [50/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:07.900 [51/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:07.900 [52/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.158 [53/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:08.158 [54/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.158 [55/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.158 [56/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:08.158 [57/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:08.158 [58/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 
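(Editorial aside, not part of the log: the "User defined options" summary printed before this ninja run maps, roughly, onto the setup invocation below. This is a reconstruction read back out of the meson output, not the verbatim command from SPDK's autobuild wrapper, so treat the paths and flags as assumptions:

    meson setup build-tmp \
      -Dprefix=/home/vagrant/spdk_repo/dpdk/build \
      -Dlibdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_kmods=false \
      -Dtests=false \
      -Dmachine=native \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # machine= still works but is deprecated, per the warning at
    # config/meson.build:120 above; cpu_instruction_set=native is the
    # current spelling of the same option.
    ninja -C build-tmp -j10

The -j10 matches the ninja invocation logged at the start of the build stage above.)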
00:03:08.158 [59/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.158 [60/718] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.158 [61/718] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.158 [62/718] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.159 [63/718] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.416 [64/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.416 [65/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.416 [66/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.416 [67/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.416 [68/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.416 [69/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.416 [70/718] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.416 [71/718] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.674 [72/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.674 [73/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.674 [74/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.674 [75/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.674 [76/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.674 [77/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:08.674 [78/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.674 [79/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.674 [80/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:08.674 [81/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:08.931 [82/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:08.931 [83/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:08.931 [84/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:08.931 [85/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:08.931 [86/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.931 [87/718] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.188 [88/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.188 [89/718] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.188 [90/718] Linking static target lib/librte_ring.a 00:03:09.188 [91/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.188 [92/718] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.188 [93/718] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.188 [94/718] Linking static target lib/librte_eal.a 00:03:09.445 [95/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.445 [96/718] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.445 [97/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.445 [98/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.445 [99/718] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.445 
[100/718] Linking static target lib/librte_mempool.a 00:03:09.445 [101/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:09.445 [102/718] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.702 [103/718] Linking static target lib/librte_rcu.a 00:03:09.702 [104/718] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:09.702 [105/718] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:09.702 [106/718] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:09.702 [107/718] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:09.702 [108/718] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:09.702 [109/718] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [110/718] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:09.960 [111/718] Linking static target lib/librte_meter.a 00:03:09.960 [112/718] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.960 [113/718] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:09.960 [114/718] Linking static target lib/librte_net.a 00:03:09.960 [115/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:09.960 [116/718] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.217 [117/718] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.217 [118/718] Linking static target lib/librte_mbuf.a 00:03:10.217 [119/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.217 [120/718] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.217 [121/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:10.217 [122/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.474 [123/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:10.474 [124/718] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.731 [125/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:10.988 [126/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:10.988 [127/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:10.988 [128/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:10.988 [129/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:10.988 [130/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:10.988 [131/718] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:10.988 [132/718] Linking static target lib/librte_pci.a 00:03:10.988 [133/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:10.988 [134/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:11.246 [135/718] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.246 [136/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:11.246 [137/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:11.246 [138/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:11.246 [139/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:11.246 [140/718] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:11.246 [141/718] Compiling C 
object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:11.246 [142/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:11.246 [143/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:11.246 [144/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:11.246 [145/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:11.246 [146/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:11.504 [147/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:11.504 [148/718] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:11.504 [149/718] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:11.504 [150/718] Linking static target lib/librte_cmdline.a 00:03:11.504 [151/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:11.504 [152/718] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:11.761 [153/718] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:11.761 [154/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:11.761 [155/718] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:11.761 [156/718] Linking static target lib/librte_metrics.a 00:03:11.761 [157/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:12.019 [158/718] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.019 [159/718] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:12.019 [160/718] Linking static target lib/librte_timer.a 00:03:12.019 [161/718] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.276 [162/718] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:12.276 [163/718] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:12.276 [164/718] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.569 [165/718] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:12.569 [166/718] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:12.569 [167/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:12.826 [168/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:12.826 [169/718] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:12.826 [170/718] Linking static target lib/librte_bitratestats.a 00:03:12.826 [171/718] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.826 [172/718] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:13.084 [173/718] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:13.084 [174/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:13.084 [175/718] Linking static target lib/librte_bbdev.a 00:03:13.342 [176/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:13.342 [177/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:13.342 [178/718] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:13.342 [179/718] Linking static target lib/librte_hash.a 00:03:13.342 [180/718] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:13.342 [181/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:13.342 [182/718] Compiling C object 
lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:13.342 [183/718] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.342 [184/718] Linking static target lib/librte_ethdev.a 00:03:13.342 [185/718] Linking static target lib/acl/libavx2_tmp.a 00:03:13.600 [186/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:13.600 [187/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:13.600 [188/718] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:13.600 [189/718] Linking static target lib/librte_cfgfile.a 00:03:13.858 [190/718] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.858 [191/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:13.858 [192/718] Linking target lib/librte_eal.so.25.0 00:03:13.858 [193/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:13.858 [194/718] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.858 [195/718] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:13.858 [196/718] Linking target lib/librte_ring.so.25.0 00:03:13.858 [197/718] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.116 [198/718] Linking target lib/librte_meter.so.25.0 00:03:14.116 [199/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:14.116 [200/718] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 00:03:14.116 [201/718] Linking target lib/librte_rcu.so.25.0 00:03:14.116 [202/718] Linking target lib/librte_pci.so.25.0 00:03:14.116 [203/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:14.116 [204/718] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:14.116 [205/718] Linking target lib/librte_mempool.so.25.0 00:03:14.116 [206/718] Linking target lib/librte_timer.so.25.0 00:03:14.116 [207/718] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:14.116 [208/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:14.116 [209/718] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:14.116 [210/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:14.116 [211/718] Linking target lib/librte_cfgfile.so.25.0 00:03:14.116 [212/718] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:14.116 [213/718] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:14.116 [214/718] Linking static target lib/librte_bpf.a 00:03:14.116 [215/718] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:14.116 [216/718] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:14.116 [217/718] Linking static target lib/librte_compressdev.a 00:03:14.116 [218/718] Linking target lib/librte_mbuf.so.25.0 00:03:14.382 [219/718] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:14.382 [220/718] Linking static target lib/librte_acl.a 00:03:14.383 [221/718] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:14.383 [222/718] Linking target lib/librte_net.so.25.0 00:03:14.383 [223/718] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.383 [224/718] Linking target lib/librte_bbdev.so.25.0 00:03:14.383 [225/718] 
Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:14.646 [226/718] Linking target lib/librte_cmdline.so.25.0 00:03:14.646 [227/718] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.646 [228/718] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.646 [229/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:14.646 [230/718] Linking target lib/librte_hash.so.25.0 00:03:14.646 [231/718] Linking target lib/librte_compressdev.so.25.0 00:03:14.646 [232/718] Linking target lib/librte_acl.so.25.0 00:03:14.646 [233/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:14.646 [234/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:14.646 [235/718] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:14.646 [236/718] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:14.646 [237/718] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:14.646 [238/718] Linking static target lib/librte_distributor.a 00:03:14.904 [239/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:14.904 [240/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:14.904 [241/718] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.904 [242/718] Linking target lib/librte_distributor.so.25.0 00:03:14.904 [243/718] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:14.904 [244/718] Linking static target lib/librte_dmadev.a 00:03:15.162 [245/718] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:15.419 [246/718] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.419 [247/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:15.419 [248/718] Linking target lib/librte_dmadev.so.25.0 00:03:15.419 [249/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:15.419 [250/718] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:15.419 [251/718] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:15.419 [252/718] Linking static target lib/librte_efd.a 00:03:15.677 [253/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:15.677 [254/718] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.677 [255/718] Linking target lib/librte_efd.so.25.0 00:03:15.677 [256/718] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:15.677 [257/718] Linking static target lib/librte_cryptodev.a 00:03:15.677 [258/718] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:15.935 [259/718] Linking static target lib/librte_dispatcher.a 00:03:15.935 [260/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:15.935 [261/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:15.935 [262/718] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:15.935 [263/718] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:16.207 [264/718] Linking static target lib/librte_gpudev.a 00:03:16.208 [265/718] 
Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.208 [266/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:16.208 [267/718] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:16.465 [268/718] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:16.465 [269/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:16.465 [270/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:16.465 [271/718] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:16.724 [272/718] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.724 [273/718] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.724 [274/718] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:16.724 [275/718] Linking target lib/librte_gpudev.so.25.0 00:03:16.724 [276/718] Linking target lib/librte_cryptodev.so.25.0 00:03:16.724 [277/718] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:16.724 [278/718] Linking static target lib/librte_gro.a 00:03:16.724 [279/718] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:16.724 [280/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:16.724 [281/718] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:16.724 [282/718] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:16.724 [283/718] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:16.724 [284/718] Linking static target lib/librte_eventdev.a 00:03:16.983 [285/718] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:16.983 [286/718] Linking static target lib/librte_gso.a 00:03:16.983 [287/718] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.983 [288/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:16.983 [289/718] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.983 [290/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:16.983 [291/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:16.983 [292/718] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.241 [293/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:17.241 [294/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:17.241 [295/718] Linking target lib/librte_ethdev.so.25.0 00:03:17.241 [296/718] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:17.241 [297/718] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:17.241 [298/718] Linking static target lib/librte_jobstats.a 00:03:17.241 [299/718] Linking static target lib/librte_ip_frag.a 00:03:17.241 [300/718] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:17.241 [301/718] Linking target lib/librte_metrics.so.25.0 00:03:17.241 [302/718] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:17.499 [303/718] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:17.499 [304/718] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:17.499 [305/718] Linking target 
lib/librte_bitratestats.so.25.0 00:03:17.499 [306/718] Linking target lib/librte_bpf.so.25.0 00:03:17.499 [307/718] Linking target lib/librte_gro.so.25.0 00:03:17.499 [308/718] Linking static target lib/librte_latencystats.a 00:03:17.499 [309/718] Linking target lib/librte_gso.so.25.0 00:03:17.499 [310/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:17.499 [311/718] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.499 [312/718] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.499 [313/718] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:17.499 [314/718] Linking target lib/librte_jobstats.so.25.0 00:03:17.499 [315/718] Linking target lib/librte_ip_frag.so.25.0 00:03:17.499 [316/718] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:17.499 [317/718] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.758 [318/718] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:17.758 [319/718] Linking target lib/librte_latencystats.so.25.0 00:03:17.758 [320/718] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.758 [321/718] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:17.758 [322/718] Linking static target lib/librte_lpm.a 00:03:17.758 [323/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:17.758 [324/718] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:17.758 [325/718] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:18.016 [326/718] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:18.016 [327/718] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.016 [328/718] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:18.016 [329/718] Linking static target lib/librte_pcapng.a 00:03:18.016 [330/718] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:18.016 [331/718] Linking target lib/librte_lpm.so.25.0 00:03:18.016 [332/718] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:18.274 [333/718] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:18.274 [334/718] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:18.274 [335/718] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.274 [336/718] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:18.274 [337/718] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:18.274 [338/718] Linking target lib/librte_pcapng.so.25.0 00:03:18.274 [339/718] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:18.274 [340/718] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.274 [341/718] Linking target lib/librte_eventdev.so.25.0 00:03:18.274 [342/718] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:18.274 [343/718] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:18.531 [344/718] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:18.531 [345/718] Linking target lib/librte_dispatcher.so.25.0 00:03:18.531 [346/718] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:18.531 [347/718] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:18.531 [348/718] Linking static target lib/librte_power.a 00:03:18.531 [349/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:18.531 [350/718] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:18.531 [351/718] Linking static target lib/librte_member.a 00:03:18.531 [352/718] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:18.531 [353/718] Linking static target lib/librte_rawdev.a 00:03:18.531 [354/718] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:18.531 [355/718] Linking static target lib/librte_regexdev.a 00:03:18.790 [356/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:18.790 [357/718] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:18.790 [358/718] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.790 [359/718] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:18.790 [360/718] Linking static target lib/librte_mldev.a 00:03:18.790 [361/718] Linking target lib/librte_member.so.25.0 00:03:18.790 [362/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:19.048 [363/718] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.048 [364/718] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:19.048 [365/718] Linking target lib/librte_rawdev.so.25.0 00:03:19.048 [366/718] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.048 [367/718] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:19.048 [368/718] Linking static target lib/librte_reorder.a 00:03:19.048 [369/718] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:19.048 [370/718] Linking target lib/librte_power.so.25.0 00:03:19.048 [371/718] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:19.048 [372/718] Linking static target lib/librte_rib.a 00:03:19.048 [373/718] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:19.048 [374/718] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.306 [375/718] Linking target lib/librte_regexdev.so.25.0 00:03:19.306 [376/718] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.306 [377/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:19.306 [378/718] Linking target lib/librte_reorder.so.25.0 00:03:19.306 [379/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:19.306 [380/718] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:19.306 [381/718] Linking static target lib/librte_stack.a 00:03:19.306 [382/718] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:19.306 [383/718] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.564 [384/718] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:19.564 [385/718] Linking static target lib/librte_security.a 00:03:19.564 [386/718] Linking target lib/librte_rib.so.25.0 00:03:19.564 [387/718] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:19.564 [388/718] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.564 
[389/718] Linking target lib/librte_stack.so.25.0 00:03:19.564 [390/718] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:19.564 [391/718] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:19.564 [392/718] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:19.823 [393/718] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.823 [394/718] Linking target lib/librte_security.so.25.0 00:03:19.823 [395/718] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:19.823 [396/718] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.823 [397/718] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:19.823 [398/718] Linking target lib/librte_mldev.so.25.0 00:03:19.823 [399/718] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:19.823 [400/718] Linking static target lib/librte_sched.a 00:03:20.124 [401/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:20.124 [402/718] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:20.124 [403/718] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.124 [404/718] Linking target lib/librte_sched.so.25.0 00:03:20.392 [405/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:20.392 [406/718] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:20.392 [407/718] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:20.392 [408/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:20.392 [409/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:20.650 [410/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:20.650 [411/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:20.650 [412/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:20.650 [413/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:20.907 [414/718] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:20.907 [415/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:20.907 [416/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:20.907 [417/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:20.907 [418/718] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:21.165 [419/718] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:21.165 [420/718] Linking static target lib/librte_ipsec.a 00:03:21.165 [421/718] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:21.165 [422/718] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:21.165 [423/718] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.165 [424/718] Linking target lib/librte_ipsec.so.25.0 00:03:21.423 [425/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:21.423 [426/718] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:21.423 [427/718] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:21.423 [428/718] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:21.682 [429/718] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:21.682 [430/718] Linking static target lib/librte_fib.a 00:03:21.682 [431/718] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:21.682 
[432/718] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:21.682 [433/718] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:21.682 [434/718] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.682 [435/718] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:21.682 [436/718] Linking target lib/librte_fib.so.25.0 00:03:21.940 [437/718] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:21.940 [438/718] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:21.940 [439/718] Linking static target lib/librte_pdcp.a 00:03:21.940 [440/718] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.940 [441/718] Linking target lib/librte_pdcp.so.25.0 00:03:22.198 [442/718] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:22.198 [443/718] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:22.198 [444/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:22.198 [445/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:22.456 [446/718] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:22.456 [447/718] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:22.456 [448/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:22.456 [449/718] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:22.714 [450/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:22.714 [451/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:22.714 [452/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:22.714 [453/718] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:22.714 [454/718] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:22.714 [455/718] Linking static target lib/librte_port.a 00:03:22.971 [456/718] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:22.971 [457/718] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:22.971 [458/718] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:22.971 [459/718] Linking static target lib/librte_pdump.a 00:03:22.971 [460/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:23.229 [461/718] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.229 [462/718] Linking target lib/librte_pdump.so.25.0 00:03:23.229 [463/718] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.229 [464/718] Linking target lib/librte_port.so.25.0 00:03:23.229 [465/718] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:23.229 [466/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:23.229 [467/718] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:23.229 [468/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:23.487 [469/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:23.487 [470/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:23.487 [471/718] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:23.487 [472/718] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 
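(Editorial aside, not part of the log: the "Generating symbol file ... .symbols" steps interleaved with each link come from the check-symbols.sh helper located during configuration; they record what each shared object is expected to export. Once a link completes, the same view can be taken directly from the built library. Illustrative only; the library path is assumed from the build-tmp tree used in this run:

    # List the dynamic symbols a freshly linked DPDK library exports;
    # any of the librte_*.so objects linked above works the same way.
    nm -D --defined-only \
      /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_port.so.25.0 | head
)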
00:03:23.745 [473/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:23.745 [474/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:23.745 [475/718] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:23.745 [476/718] Linking static target lib/librte_table.a
00:03:24.003 [477/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:24.003 [478/718] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:24.003 [479/718] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:24.003 [480/718] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.003 [481/718] Linking target lib/librte_table.so.25.0
00:03:24.262 [482/718] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols
00:03:24.262 [483/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:03:24.262 [484/718] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:24.262 [485/718] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:24.262 [486/718] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:24.519 [487/718] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:24.520 [488/718] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:24.520 [489/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:03:24.776 [490/718] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:03:24.776 [491/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:25.034 [492/718] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:25.034 [493/718] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:03:25.034 [494/718] Linking static target lib/librte_graph.a
00:03:25.034 [495/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:25.034 [496/718] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:25.034 [497/718] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:25.291 [498/718] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:03:25.291 [499/718] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:03:25.291 [500/718] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.291 [501/718] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:25.549 [502/718] Linking target lib/librte_graph.so.25.0
00:03:25.549 [503/718] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols
00:03:25.549 [504/718] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:25.549 [505/718] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:03:25.549 [506/718] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:25.807 [507/718] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:25.807 [508/718] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:03:25.807 [509/718] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:25.807 [510/718] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:03:25.807 [511/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:25.807 [512/718] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:03:26.065 [513/718] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:26.065 [514/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:26.065 [515/718] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:03:26.065 [516/718] Linking static target lib/librte_node.a
00:03:26.065 [517/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:26.065 [518/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:26.065 [519/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:26.322 [520/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:26.322 [521/718] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.322 [522/718] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:26.322 [523/718] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:26.322 [524/718] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:26.322 [525/718] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:26.322 [526/718] Linking target lib/librte_node.so.25.0
00:03:26.580 [527/718] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:26.580 [528/718] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:26.580 [529/718] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:26.580 [530/718] Linking static target drivers/librte_bus_vdev.a
00:03:26.580 [531/718] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:26.580 [532/718] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:26.580 [533/718] Linking static target drivers/librte_bus_pci.a
00:03:26.580 [534/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:26.580 [535/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:26.580 [536/718] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:26.837 [537/718] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.837 [538/718] Linking target drivers/librte_bus_vdev.so.25.0
00:03:26.837 [539/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:26.837 [540/718] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols
00:03:26.837 [541/718] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:26.837 [542/718] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:26.838 [543/718] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:26.838 [544/718] Linking target drivers/librte_bus_pci.so.25.0
00:03:27.095 [545/718] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols
00:03:27.095 [546/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:27.095 [547/718] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:27.095 [548/718] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:27.095 [549/718] Linking static target drivers/librte_mempool_ring.a
00:03:27.095 [550/718] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:27.095 [551/718] Linking target drivers/librte_mempool_ring.so.25.0
00:03:27.095 [552/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
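The sym_chk steps interleaved above are meson-wrapped checks that each freshly linked shared library exports the symbols recorded in its generated .symbols file. A rough manual equivalent for one of the libraries just linked, assuming this run's build-tmp layout (the exact meson-generated command is not shown in the log):

  # Illustrative only: list the dynamic symbols librte_table.so.25.0 exports
  nm -D --defined-only /home/vagrant/spdk_repo/dpdk/build-tmp/lib/librte_table.so.25.0 \
    | awk '{ print $3 }' | sort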
00:03:27.352 [553/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:27.611 [554/718] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:27.611 [555/718] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:27.880 [556/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:28.138 [557/718] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:28.138 [558/718] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:28.138 [559/718] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:28.138 [560/718] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:28.396 [561/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:28.396 [562/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:28.653 [563/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:28.653 [564/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:28.653 [565/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:28.653 [566/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:28.653 [567/718] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output)
00:03:28.910 [568/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:28.910 [569/718] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:29.168 [570/718] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:29.168 [571/718] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:29.427 [572/718] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:29.427 [573/718] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:29.427 [574/718] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:29.427 [575/718] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:29.686 [576/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:29.686 [577/718] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:29.686 [578/718] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:29.686 [579/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:29.686 [580/718] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o
00:03:29.686 [581/718] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:29.944 [582/718] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:29.944 [583/718] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:29.944 [584/718] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:30.202 [585/718] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:30.202 [586/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:30.202 [587/718] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:30.202 [588/718] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:30.202 [589/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:30.202 [590/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:30.202 [591/718] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:30.202 [592/718] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:30.460 [593/718] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:30.460 [594/718] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:30.460 [595/718] Linking static target drivers/librte_net_i40e.a
00:03:30.460 [596/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:30.460 [597/718] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:30.460 [598/718] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:30.719 [599/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:30.719 [600/718] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:30.978 [601/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:30.978 [602/718] Linking target drivers/librte_net_i40e.so.25.0
00:03:30.978 [603/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:30.978 [604/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:30.978 [605/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:30.978 [606/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:31.237 [607/718] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:31.237 [608/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:31.495 [609/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:31.495 [610/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:31.495 [611/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:31.495 [612/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:31.753 [613/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:31.753 [614/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:31.753 [615/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:31.753 [616/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:31.753 [617/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:32.011 [618/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:32.011 [619/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:32.011 [620/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:32.011 [621/718] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:32.011 [622/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:32.269 [623/718] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:32.269 [624/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:32.269 [625/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:32.270 [626/718] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:32.528 [627/718] Linking static target lib/librte_vhost.a
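Steps [557/718] through [560/718] above compile the i40e receive/transmit vector path twice, into separate AVX2 and AVX-512 static libraries; both are always built, and the driver picks a path at run time based on what the host CPU supports. A purely illustrative host-side check of those CPU flags:

  # Show whether this host advertises the AVX2 / AVX-512F ISA extensions
  grep -o -w -E 'avx2|avx512f' /proc/cpuinfo | sort -u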
00:03:32.528 [628/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:33.096 [629/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:33.096 [630/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:33.096 [631/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:33.096 [632/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:33.096 [633/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:33.096 [634/718] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.096 [635/718] Linking target lib/librte_vhost.so.25.0
00:03:33.355 [636/718] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:33.355 [637/718] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:33.355 [638/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:33.355 [639/718] Linking static target lib/librte_pipeline.a
00:03:33.355 [640/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:33.355 [641/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:33.355 [642/718] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:33.355 [643/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:33.355 [644/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:33.613 [645/718] Linking target app/dpdk-dumpcap
00:03:33.613 [646/718] Linking target app/dpdk-graph
00:03:33.613 [647/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:33.613 [648/718] Linking target app/dpdk-pdump
00:03:33.871 [649/718] Linking target app/dpdk-test-acl
00:03:33.871 [650/718] Linking target app/dpdk-proc-info
00:03:33.871 [651/718] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:33.871 [652/718] Linking target app/dpdk-test-cmdline
00:03:33.871 [653/718] Linking target app/dpdk-test-crypto-perf
00:03:33.871 [654/718] Linking target app/dpdk-test-compress-perf
00:03:33.871 [655/718] Linking target app/dpdk-test-dma-perf
00:03:34.130 [656/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:34.130 [657/718] Linking target app/dpdk-test-fib
00:03:34.130 [658/718] Linking target app/dpdk-test-gpudev
00:03:34.130 [659/718] Linking target app/dpdk-test-flow-perf
00:03:34.130 [660/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:34.130 [661/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:34.130 [662/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:34.398 [663/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:34.398 [664/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:34.398 [665/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:34.398 [666/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:34.658 [667/718] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:34.658 [668/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:34.658 [669/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:34.658 [670/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:34.916 [671/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:34.916 [672/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:34.916 [673/718] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:34.916 [674/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:34.916 [675/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:34.916 [676/718] Linking target app/dpdk-test-eventdev
00:03:34.916 [677/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:35.173 [678/718] Linking target app/dpdk-test-bbdev
00:03:35.173 [679/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:35.173 [680/718] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:35.173 [681/718] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:35.430 [682/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:35.430 [683/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:35.430 [684/718] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:35.430 [685/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:35.430 [686/718] Linking target app/dpdk-test-pipeline
00:03:35.430 [687/718] Linking target lib/librte_pipeline.so.25.0
00:03:35.687 [688/718] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:35.944 [689/718] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:35.944 [690/718] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:35.944 [691/718] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:35.944 [692/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:35.944 [693/718] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:35.944 [694/718] Linking target app/dpdk-test-mldev
00:03:36.204 [695/718] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:36.204 [696/718] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:36.462 [697/718] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:36.462 [698/718] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:36.462 [699/718] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:36.720 [700/718] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:36.720 [701/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:36.978 [702/718] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:36.978 [703/718] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:36.978 [704/718] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:37.236 [705/718] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:37.236 [706/718] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:37.236 [707/718] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:37.236 [708/718] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:37.236 [709/718] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:37.496 [710/718] Linking target app/dpdk-test-regex
00:03:37.496 [711/718] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:03:37.496 [712/718] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:37.496 [713/718] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:37.496 [714/718] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:37.496 [715/718] Linking target app/dpdk-test-sad
00:03:37.755 [716/718] Linking target app/dpdk-testpmd
00:03:37.755 [717/718] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:38.014 [718/718] Linking target app/dpdk-test-security-perf
00:03:38.272 21:45:22 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:38.272 21:45:22 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:38.272 21:45:22 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:38.272 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:38.272 [0/1] Installing files.
00:03:38.534 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:38.534 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
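The three traced shell commands above show autobuild_common.sh gating on the host OS before installing the finished DPDK build; since uname -s reports Linux, the FreeBSD comparison fails and the ninja install path runs. A minimal sketch of that gate (the body of the FreeBSD branch is an assumption, as it is not exercised or shown in this run):

  # Sketch of the OS gate traced at autobuild_common.sh@194/@207
  if [[ "$(uname -s)" == "FreeBSD" ]]; then
      :  # FreeBSD-specific install handling would go here (not taken on this host)
  fi
  ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install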
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.534 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:38.535 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:38.536 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:38.537
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.537 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.538 
Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.538 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.538 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.538 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.539 Installing 
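The long run of copies above only stages the DPDK example sources and their Makefiles under build/share/dpdk/examples; nothing is compiled at this point. For orientation, a minimal C sketch (illustrative, not taken from the staged tree) of the startup pattern all of these examples share: hand argv to the EAL first, do application work, then tear the EAL down.

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>

int
main(int argc, char **argv)
{
    /* Parse the EAL arguments (cores, memory, devices). rte_eal_init()
     * returns the number of arguments consumed, or negative on failure. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    /* Application-specific setup and the packet loop would go here. */

    rte_eal_cleanup();
    return 0;
}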
00:03:38.538 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.538 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.539 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:38.799 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:38.799 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:38.799 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:38.799 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:38.799 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:38.799 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
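The objects above land in three places: static archives and .so.25.0 shared libraries in build/lib, the driver (PMD) shared objects additionally in the plugin directory build/lib/dpdk/pmds-25.0, and command-line tools such as dpdk-testpmd in build/bin. A hedged sketch of why the PMD directory matters to consumers of this install: an application can point the EAL at it with the -d option, which loads driver shared objects from the given path (the wrapper function below is hypothetical, the path is this build's).

#include <rte_eal.h>

/* Illustrative only: boot the EAL and load the PMDs from the plugin
 * directory populated by the install step above. */
static int
init_with_installed_pmds(void)
{
    char *eal_argv[] = {
        (char *)"app",
        (char *)"-d",
        (char *)"/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0",
    };

    return rte_eal_init(3, eal_argv);
}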
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.799 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include
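With the core EAL headers now in build/include (rte_version.h among them, a few entries up), the simplest smoke test of the installed SDK is a program that only reports the release it was compiled against. A minimal sketch, assuming the compiler's include and linker paths point at this build tree; for the libraries above it should print a 25.0 version string.

#include <stdio.h>
#include <rte_version.h>

int
main(void)
{
    /* rte_version() returns a human-readable "DPDK <release>" string. */
    printf("%s\n", rte_version());
    return 0;
}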
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
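The ring, rcu, mempool and mbuf headers just installed form DPDK's buffer-management core; almost every application creates a packet-buffer pool from them immediately after EAL init. A short sketch of that step (the pool name and sizing below are arbitrary illustrative choices):

#include <rte_mbuf.h>
#include <rte_lcore.h>

/* Create a pool of 8191 mbufs with a 250-entry per-core cache, using
 * the default data-room size; returns NULL on failure. */
static struct rte_mempool *
make_pktmbuf_pool(void)
{
    return rte_pktmbuf_pool_create("mbuf_pool", 8191, 250, 0,
            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
}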
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
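rte_ethdev.h, installed just above with the rest of the ethdev headers, is the port-level API that dpdk-testpmd and the staged examples are written against. A hedged sketch of the usual port-enumeration idiom (error handling omitted):

#include <stdio.h>
#include <rte_ethdev.h>

static void
list_ports(void)
{
    uint16_t port_id;

    printf("%u ports available\n", rte_eth_dev_count_avail());
    /* Visit every port exposed by the PMDs the EAL has loaded. */
    RTE_ETH_FOREACH_DEV(port_id)
        printf("port %u is usable\n", port_id);
}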
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.800 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.801 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 
Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:38.802 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:38.802 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:38.802 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:38.802 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:39.061 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:39.061 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:39.061 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:39.061 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:39.061 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:39.061 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:39.061 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:39.061 Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:39.061 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:39.061 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:39.061 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:39.061 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:39.061 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:39.061 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:39.061 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:39.061 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:39.061 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:39.061 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:39.061 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:39.061 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 
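The two pkg-config files recorded just above (libdpdk-libs.pc and libdpdk.pc, installed to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig) are what make this private DPDK build discoverable to downstream consumers; the SPDK configure step later in this log ("Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...") resolves DPDK's compile and link flags through exactly this mechanism. A minimal sketch of such a probe, using the paths shown in this log (the probe is illustrative and was not a command the autotest ran):

  # Point pkg-config at the freshly installed private DPDK build
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  # Report the installed DPDK version
  pkg-config --modversion libdpdk
  # Emit the compile flags (-I .../build/include) and link flags
  # (-L .../build/lib plus the librte_* libraries) a consumer would use
  pkg-config --cflags --libs libdpdk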
00:03:39.061 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:39.061 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:39.061 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:39.061 Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:39.061 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:39.061 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:39.061 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:39.061 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:39.061 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:39.061 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:39.061 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:39.061 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:39.061 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:39.061 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:39.061 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:39.061 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:39.061 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:39.061 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:39.061 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:39.061 Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:39.061 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:39.061 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:39.061 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:39.061 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:39.061 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:39.061 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:39.061 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:39.061 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 
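Each "Installing symlink" pair in these records follows the usual ELF shared-library versioning convention: librte_<name>.so.25.0 is the real object, librte_<name>.so.25 matches the SONAME the runtime loader resolves, and the unversioned librte_<name>.so is the development link the compile-time linker follows. A minimal sketch of the equivalent done by hand for one library, assuming the install prefix shown in this log (illustrative only, not part of the run):

  cd /home/vagrant/spdk_repo/dpdk/build/lib
  # SONAME link: the name a linked binary requests at run time
  ln -sf librte_eal.so.25.0 librte_eal.so.25
  # Development link: the name "-lrte_eal" resolves to at link time
  ln -sf librte_eal.so.25 librte_eal.so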
00:03:39.061 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:39.061 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:39.061 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:39.061 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:39.061 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:39.061 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:39.061 Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:39.061 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:39.061 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:39.061 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:39.061 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:39.061 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:39.061 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:39.061 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:39.061 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:39.061 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:39.061 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:39.061 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:39.061 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:39.061 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:39.061 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:39.061 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:39.061 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:39.061 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:39.061 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:39.061 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:39.061 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:39.061 './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:39.061 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:39.061 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:39.061 './librte_mempool_ring.so' -> 
'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:39.061 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:39.061 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:39.061 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:39.061 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:39.061 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:39.061 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:39.061 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:39.061 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:39.061 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:39.061 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:39.061 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:39.061 Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:39.061 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:39.061 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:39.061 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:39.061 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:39.061 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:39.061 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:39.061 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:39.061 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:39.061 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:39.062 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:39.062 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:39.062 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:39.062 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:39.062 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:39.062 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:39.062 Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:39.062 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:39.062 Installing symlink pointing to librte_pdcp.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:39.062 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:39.062 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:39.062 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:39.062 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:39.062 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:39.062 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:39.062 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:39.062 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:39.062 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:39.062 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:39.062 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:39.062 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:39.062 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:39.062 Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:39.062 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:39.062 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:39.062 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:39.062 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:39.062 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:39.062 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:39.062 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:39.062 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:39.062 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:39.062 ************************************ 00:03:39.062 END TEST build_native_dpdk 00:03:39.062 ************************************ 00:03:39.062 21:45:23 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:39.062 21:45:23 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:39.062 00:03:39.062 real 0m38.181s 00:03:39.062 user 4m27.636s 00:03:39.062 sys 0m39.603s 00:03:39.062 21:45:23 
build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:39.062 21:45:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:39.062 21:45:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:39.062 21:45:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:39.062 21:45:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:39.062 21:45:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:39.062 21:45:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:39.062 21:45:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:39.062 21:45:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:39.062 21:45:23 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:39.062 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:39.320 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.320 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.320 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:39.577 Using 'verbs' RDMA provider 00:03:50.500 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:02.754 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:02.754 Creating mk/config.mk...done. 00:04:02.754 Creating mk/cc.flags.mk...done. 00:04:02.754 Type 'make' to build. 00:04:02.754 21:45:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:02.754 21:45:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:02.754 21:45:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:02.754 21:45:45 -- common/autotest_common.sh@10 -- $ set +x 00:04:02.754 ************************************ 00:04:02.754 START TEST make 00:04:02.754 ************************************ 00:04:02.754 21:45:46 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:02.754 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:02.754 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:02.754 meson setup builddir \ 00:04:02.754 -Dwith-libaio=enabled \ 00:04:02.754 -Dwith-liburing=enabled \ 00:04:02.754 -Dwith-libvfn=disabled \ 00:04:02.754 -Dwith-spdk=false && \ 00:04:02.754 meson compile -C builddir && \ 00:04:02.754 cd -) 00:04:02.754 make[1]: Nothing to be done for 'all'.
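One detail from the install records above is worth noting before the xnvme sub-build output begins: the driver libraries (librte_bus_pci, librte_bus_vdev, librte_mempool_ring, librte_net_i40e) were not left beside the core libraries but were relocated by symlink-drivers-solibs.sh into the dpdk/pmds-25.0 plugin directory, which DPDK's EAL scans for loadable PMDs at startup. A minimal sketch for inspecting that directory and loading one driver explicitly via EAL's standard -d option, with paths taken from this log and a placeholder application name (illustrative only; no such invocation appears in this run):

  # List the relocated PMD shared objects
  ls /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/
  # Load one driver explicitly into a hypothetical DPDK application
  ./my_dpdk_app -d /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0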
00:04:04.126 The Meson build system 00:04:04.126 Version: 1.5.0 00:04:04.126 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:04.127 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:04.127 Build type: native build 00:04:04.127 Project name: xnvme 00:04:04.127 Project version: 0.7.3 00:04:04.127 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:04.127 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:04.127 Host machine cpu family: x86_64 00:04:04.127 Host machine cpu: x86_64 00:04:04.127 Message: host_machine.system: linux 00:04:04.127 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:04.127 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:04.127 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:04.127 Run-time dependency threads found: YES 00:04:04.127 Has header "setupapi.h" : NO 00:04:04.127 Has header "linux/blkzoned.h" : YES 00:04:04.127 Has header "linux/blkzoned.h" : YES (cached) 00:04:04.127 Has header "libaio.h" : YES 00:04:04.127 Library aio found: YES 00:04:04.127 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:04.127 Run-time dependency liburing found: YES 2.2 00:04:04.127 Dependency libvfn skipped: feature with-libvfn disabled 00:04:04.127 Run-time dependency appleframeworks found: NO (tried framework) 00:04:04.127 Run-time dependency appleframeworks found: NO (tried framework) 00:04:04.127 Configuring xnvme_config.h using configuration 00:04:04.127 Configuring xnvme.spec using configuration 00:04:04.127 Run-time dependency bash-completion found: YES 2.11 00:04:04.127 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:04.127 Program cp found: YES (/usr/bin/cp) 00:04:04.127 Has header "winsock2.h" : NO 00:04:04.127 Has header "dbghelp.h" : NO 00:04:04.127 Library rpcrt4 found: NO 00:04:04.127 Library rt found: YES 00:04:04.127 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:04.127 Found CMake: /usr/bin/cmake (3.27.7) 00:04:04.127 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:04:04.127 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:04:04.127 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:04:04.127 Build targets in project: 32 00:04:04.127 00:04:04.127 xnvme 0.7.3 00:04:04.127 00:04:04.127 User defined options 00:04:04.127 with-libaio : enabled 00:04:04.127 with-liburing: enabled 00:04:04.127 with-libvfn : disabled 00:04:04.127 with-spdk : false 00:04:04.127 00:04:04.127 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:04.383 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:04.383 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:04:04.383 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:04:04.383 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:04:04.383 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:04:04.383 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:04:04.383 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:04:04.383 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:04:04.383 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:04:04.383 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:04:04.383 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:04:04.383 [11/203] 
Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:04:04.383 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:04:04.641 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:04:04.641 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:04:04.641 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:04:04.641 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:04:04.641 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:04:04.641 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:04:04.641 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:04:04.641 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:04:04.641 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:04:04.641 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:04:04.641 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:04:04.641 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:04:04.641 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:04:04.641 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:04:04.641 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:04:04.641 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:04:04.641 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:04:04.641 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:04:04.641 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:04:04.641 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:04:04.641 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:04:04.641 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:04:04.641 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:04:04.641 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:04:04.641 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:04:04.641 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:04:04.641 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:04:04.641 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:04:04.898 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:04:04.898 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:04:04.898 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:04:04.898 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:04:04.898 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:04:04.898 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:04:04.898 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:04:04.898 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:04:04.898 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:04:04.898 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:04:04.898 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:04:04.898 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:04:04.898 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:04:04.898 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:04:04.898 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:04:04.898 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:04:04.898 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:04:04.898 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:04:04.898 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:04:04.898 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:04:04.898 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:04:04.898 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:04:04.898 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:04:04.898 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:04:04.898 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:04:04.898 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:04:04.898 [67/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:04:04.898 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:04:05.156 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:04:05.156 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:04:05.156 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:04:05.156 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:04:05.156 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:04:05.156 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:04:05.156 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:04:05.156 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:04:05.156 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:04:05.156 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:04:05.156 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:04:05.156 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:04:05.156 [81/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:04:05.156 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:04:05.156 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:04:05.414 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:04:05.414 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:04:05.414 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:04:05.414 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:04:05.414 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:04:05.414 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:04:05.414 [90/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:04:05.414 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:04:05.414 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:04:05.414 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:04:05.414 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:04:05.414 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:04:05.414 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:04:05.414 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:04:05.414 [98/203] 
Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:04:05.414 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:04:05.414 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:04:05.414 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:04:05.414 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:04:05.414 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:04:05.414 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:04:05.414 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:04:05.414 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:04:05.414 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:04:05.414 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:04:05.414 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:04:05.414 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:04:05.414 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:04:05.414 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:04:05.414 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:04:05.414 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:04:05.414 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:04:05.672 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:04:05.672 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:04:05.672 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:04:05.672 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:04:05.672 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:04:05.672 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:04:05.672 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:04:05.672 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:04:05.672 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:04:05.672 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:04:05.672 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:04:05.672 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:04:05.672 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:04:05.672 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:04:05.672 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:04:05.672 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:04:05.672 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:04:05.672 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:04:05.672 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:04:05.672 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:04:05.672 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:04:05.672 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:04:05.672 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:04:05.672 [139/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:04:05.930 [140/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:04:05.930 [141/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:04:05.930 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:04:05.930 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:04:05.930 [144/203] Linking target lib/libxnvme.so 00:04:05.930 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:04:05.930 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:04:05.930 [147/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:04:05.930 [148/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:04:05.930 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:04:05.930 [150/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:04:05.930 [151/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:04:05.930 [152/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:04:05.930 [153/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:04:05.930 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:04:06.188 [155/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:04:06.188 [156/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:04:06.189 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:04:06.189 [158/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:04:06.189 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:04:06.189 [160/203] Compiling C object tools/xdd.p/xdd.c.o 00:04:06.189 [161/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:04:06.189 [162/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:04:06.189 [163/203] Compiling C object tools/lblk.p/lblk.c.o 00:04:06.189 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:04:06.189 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:04:06.189 [166/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:04:06.189 [167/203] Compiling C object tools/kvs.p/kvs.c.o 00:04:06.189 [168/203] Compiling C object tools/zoned.p/zoned.c.o 00:04:06.447 [169/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:04:06.447 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:04:06.447 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:04:06.447 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:04:06.447 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:04:06.447 [174/203] Linking static target lib/libxnvme.a 00:04:06.447 [175/203] Linking target tests/xnvme_tests_async_intf 00:04:06.447 [176/203] Linking target tests/xnvme_tests_cli 00:04:06.447 [177/203] Linking target tests/xnvme_tests_buf 00:04:06.447 [178/203] Linking target tests/xnvme_tests_lblk 00:04:06.447 [179/203] Linking target tests/xnvme_tests_znd_zrwa 00:04:06.447 [180/203] Linking target tests/xnvme_tests_enum 00:04:06.447 [181/203] Linking target tests/xnvme_tests_ioworker 00:04:06.447 [182/203] Linking target tests/xnvme_tests_znd_explicit_open 00:04:06.447 [183/203] Linking target tests/xnvme_tests_znd_append 00:04:06.447 [184/203] Linking target tests/xnvme_tests_xnvme_cli 00:04:06.447 [185/203] Linking target tests/xnvme_tests_znd_state 00:04:06.447 [186/203] Linking target tests/xnvme_tests_scc 00:04:06.447 [187/203] Linking target tests/xnvme_tests_xnvme_file 00:04:06.447 [188/203] Linking target tests/xnvme_tests_kvs 00:04:06.447 [189/203] Linking target tests/xnvme_tests_map 00:04:06.447 [190/203] Linking target tools/xdd 00:04:06.447 
[191/203] Linking target tools/kvs 00:04:06.447 [192/203] Linking target tools/xnvme_file 00:04:06.705 [193/203] Linking target examples/xnvme_dev 00:04:06.705 [194/203] Linking target tools/zoned 00:04:06.705 [195/203] Linking target examples/zoned_io_async 00:04:06.705 [196/203] Linking target examples/xnvme_enum 00:04:06.705 [197/203] Linking target tools/lblk 00:04:06.705 [198/203] Linking target examples/xnvme_hello 00:04:06.705 [199/203] Linking target examples/xnvme_io_async 00:04:06.705 [200/203] Linking target tools/xnvme 00:04:06.705 [201/203] Linking target examples/xnvme_single_async 00:04:06.705 [202/203] Linking target examples/xnvme_single_sync 00:04:06.705 [203/203] Linking target examples/zoned_io_sync 00:04:06.705 INFO: autodetecting backend as ninja 00:04:06.705 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:06.705 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:38.809 CC lib/log/log.o 00:04:38.809 CC lib/ut/ut.o 00:04:38.809 CC lib/log/log_deprecated.o 00:04:38.809 CC lib/log/log_flags.o 00:04:38.809 CC lib/ut_mock/mock.o 00:04:38.809 LIB libspdk_log.a 00:04:38.809 LIB libspdk_ut.a 00:04:38.809 LIB libspdk_ut_mock.a 00:04:38.809 SO libspdk_ut.so.2.0 00:04:38.809 SO libspdk_log.so.7.0 00:04:38.809 SO libspdk_ut_mock.so.6.0 00:04:38.809 SYMLINK libspdk_ut.so 00:04:38.809 SYMLINK libspdk_log.so 00:04:38.809 SYMLINK libspdk_ut_mock.so 00:04:38.809 CC lib/util/base64.o 00:04:38.809 CC lib/util/bit_array.o 00:04:38.809 CC lib/util/crc16.o 00:04:38.809 CC lib/util/crc32c.o 00:04:38.809 CC lib/ioat/ioat.o 00:04:38.809 CC lib/util/crc32.o 00:04:38.809 CC lib/util/cpuset.o 00:04:38.809 CXX lib/trace_parser/trace.o 00:04:38.809 CC lib/dma/dma.o 00:04:38.809 CC lib/vfio_user/host/vfio_user_pci.o 00:04:38.809 CC lib/util/crc32_ieee.o 00:04:38.809 CC lib/util/crc64.o 00:04:38.809 CC lib/vfio_user/host/vfio_user.o 00:04:38.809 CC lib/util/dif.o 00:04:38.809 LIB libspdk_dma.a 00:04:38.809 CC lib/util/fd.o 00:04:38.809 SO libspdk_dma.so.5.0 00:04:38.809 CC lib/util/fd_group.o 00:04:38.809 CC lib/util/file.o 00:04:38.809 CC lib/util/hexlify.o 00:04:38.809 SYMLINK libspdk_dma.so 00:04:38.809 CC lib/util/iov.o 00:04:38.809 LIB libspdk_ioat.a 00:04:38.809 SO libspdk_ioat.so.7.0 00:04:38.809 CC lib/util/math.o 00:04:38.809 CC lib/util/net.o 00:04:38.809 LIB libspdk_vfio_user.a 00:04:38.809 SYMLINK libspdk_ioat.so 00:04:38.809 CC lib/util/pipe.o 00:04:38.809 SO libspdk_vfio_user.so.5.0 00:04:38.809 CC lib/util/strerror_tls.o 00:04:38.809 CC lib/util/string.o 00:04:38.809 SYMLINK libspdk_vfio_user.so 00:04:38.809 CC lib/util/uuid.o 00:04:38.809 CC lib/util/xor.o 00:04:38.809 CC lib/util/zipf.o 00:04:38.809 CC lib/util/md5.o 00:04:38.809 LIB libspdk_util.a 00:04:38.809 SO libspdk_util.so.10.0 00:04:38.809 LIB libspdk_trace_parser.a 00:04:38.809 SO libspdk_trace_parser.so.6.0 00:04:38.809 SYMLINK libspdk_util.so 00:04:38.809 SYMLINK libspdk_trace_parser.so 00:04:38.809 CC lib/env_dpdk/env.o 00:04:38.809 CC lib/env_dpdk/memory.o 00:04:38.809 CC lib/env_dpdk/pci.o 00:04:38.809 CC lib/vmd/vmd.o 00:04:38.809 CC lib/env_dpdk/init.o 00:04:38.809 CC lib/conf/conf.o 00:04:38.809 CC lib/json/json_parse.o 00:04:38.809 CC lib/rdma_utils/rdma_utils.o 00:04:38.809 CC lib/rdma_provider/common.o 00:04:38.809 CC lib/idxd/idxd.o 00:04:38.809 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:38.809 LIB libspdk_conf.a 00:04:38.809 CC lib/json/json_util.o 00:04:38.809 SO libspdk_conf.so.6.0 00:04:38.809 LIB libspdk_rdma_utils.a 00:04:38.809 SO 
libspdk_rdma_utils.so.1.0 00:04:38.809 SYMLINK libspdk_conf.so 00:04:38.809 CC lib/env_dpdk/threads.o 00:04:38.809 SYMLINK libspdk_rdma_utils.so 00:04:38.809 CC lib/env_dpdk/pci_ioat.o 00:04:38.809 CC lib/idxd/idxd_user.o 00:04:38.809 LIB libspdk_rdma_provider.a 00:04:38.809 CC lib/idxd/idxd_kernel.o 00:04:38.809 SO libspdk_rdma_provider.so.6.0 00:04:38.809 SYMLINK libspdk_rdma_provider.so 00:04:38.809 CC lib/json/json_write.o 00:04:38.809 CC lib/env_dpdk/pci_virtio.o 00:04:38.809 CC lib/env_dpdk/pci_vmd.o 00:04:38.809 CC lib/env_dpdk/pci_idxd.o 00:04:38.809 CC lib/vmd/led.o 00:04:38.809 CC lib/env_dpdk/pci_event.o 00:04:38.809 CC lib/env_dpdk/sigbus_handler.o 00:04:38.809 CC lib/env_dpdk/pci_dpdk.o 00:04:38.809 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:38.809 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:39.067 LIB libspdk_vmd.a 00:04:39.067 LIB libspdk_idxd.a 00:04:39.068 SO libspdk_vmd.so.6.0 00:04:39.068 LIB libspdk_json.a 00:04:39.068 SO libspdk_idxd.so.12.1 00:04:39.068 SO libspdk_json.so.6.0 00:04:39.068 SYMLINK libspdk_vmd.so 00:04:39.068 SYMLINK libspdk_idxd.so 00:04:39.068 SYMLINK libspdk_json.so 00:04:39.327 CC lib/jsonrpc/jsonrpc_server.o 00:04:39.327 CC lib/jsonrpc/jsonrpc_client.o 00:04:39.327 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:39.327 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:39.586 LIB libspdk_jsonrpc.a 00:04:39.586 SO libspdk_jsonrpc.so.6.0 00:04:39.586 LIB libspdk_env_dpdk.a 00:04:39.845 SYMLINK libspdk_jsonrpc.so 00:04:39.845 SO libspdk_env_dpdk.so.15.0 00:04:39.845 SYMLINK libspdk_env_dpdk.so 00:04:39.845 CC lib/rpc/rpc.o 00:04:40.103 LIB libspdk_rpc.a 00:04:40.103 SO libspdk_rpc.so.6.0 00:04:40.362 SYMLINK libspdk_rpc.so 00:04:40.362 CC lib/keyring/keyring_rpc.o 00:04:40.362 CC lib/keyring/keyring.o 00:04:40.362 CC lib/notify/notify_rpc.o 00:04:40.362 CC lib/notify/notify.o 00:04:40.362 CC lib/trace/trace.o 00:04:40.362 CC lib/trace/trace_rpc.o 00:04:40.362 CC lib/trace/trace_flags.o 00:04:40.620 LIB libspdk_notify.a 00:04:40.620 SO libspdk_notify.so.6.0 00:04:40.620 LIB libspdk_keyring.a 00:04:40.620 SYMLINK libspdk_notify.so 00:04:40.620 SO libspdk_keyring.so.2.0 00:04:40.620 LIB libspdk_trace.a 00:04:40.620 SO libspdk_trace.so.11.0 00:04:40.620 SYMLINK libspdk_keyring.so 00:04:40.878 SYMLINK libspdk_trace.so 00:04:40.878 CC lib/thread/thread.o 00:04:40.878 CC lib/thread/iobuf.o 00:04:40.878 CC lib/sock/sock.o 00:04:40.878 CC lib/sock/sock_rpc.o 00:04:41.456 LIB libspdk_sock.a 00:04:41.456 SO libspdk_sock.so.10.0 00:04:41.456 SYMLINK libspdk_sock.so 00:04:41.731 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:41.732 CC lib/nvme/nvme_fabric.o 00:04:41.732 CC lib/nvme/nvme_ns_cmd.o 00:04:41.732 CC lib/nvme/nvme_ctrlr.o 00:04:41.732 CC lib/nvme/nvme_pcie_common.o 00:04:41.732 CC lib/nvme/nvme_pcie.o 00:04:41.732 CC lib/nvme/nvme.o 00:04:41.732 CC lib/nvme/nvme_ns.o 00:04:41.732 CC lib/nvme/nvme_qpair.o 00:04:42.301 CC lib/nvme/nvme_quirks.o 00:04:42.301 CC lib/nvme/nvme_transport.o 00:04:42.301 CC lib/nvme/nvme_discovery.o 00:04:42.301 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:42.560 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:42.560 LIB libspdk_thread.a 00:04:42.560 CC lib/nvme/nvme_tcp.o 00:04:42.560 SO libspdk_thread.so.10.1 00:04:42.560 CC lib/nvme/nvme_opal.o 00:04:42.560 CC lib/nvme/nvme_io_msg.o 00:04:42.560 SYMLINK libspdk_thread.so 00:04:42.818 CC lib/nvme/nvme_poll_group.o 00:04:42.818 CC lib/accel/accel.o 00:04:42.818 CC lib/accel/accel_rpc.o 00:04:42.818 CC lib/accel/accel_sw.o 00:04:43.078 CC lib/nvme/nvme_zns.o 00:04:43.078 CC lib/nvme/nvme_stubs.o 00:04:43.078 CC 
lib/nvme/nvme_auth.o 00:04:43.078 CC lib/nvme/nvme_cuse.o 00:04:43.078 CC lib/blob/blobstore.o 00:04:43.337 CC lib/init/json_config.o 00:04:43.337 CC lib/init/subsystem.o 00:04:43.598 CC lib/virtio/virtio.o 00:04:43.598 CC lib/virtio/virtio_vhost_user.o 00:04:43.598 CC lib/nvme/nvme_rdma.o 00:04:43.598 CC lib/init/subsystem_rpc.o 00:04:43.598 CC lib/fsdev/fsdev.o 00:04:43.859 CC lib/init/rpc.o 00:04:43.859 CC lib/fsdev/fsdev_io.o 00:04:43.859 CC lib/fsdev/fsdev_rpc.o 00:04:43.859 LIB libspdk_init.a 00:04:43.859 CC lib/virtio/virtio_vfio_user.o 00:04:43.859 SO libspdk_init.so.6.0 00:04:43.859 CC lib/virtio/virtio_pci.o 00:04:43.859 CC lib/blob/request.o 00:04:43.859 SYMLINK libspdk_init.so 00:04:43.859 CC lib/blob/zeroes.o 00:04:43.859 LIB libspdk_accel.a 00:04:44.119 SO libspdk_accel.so.16.0 00:04:44.119 SYMLINK libspdk_accel.so 00:04:44.119 CC lib/blob/blob_bs_dev.o 00:04:44.119 LIB libspdk_virtio.a 00:04:44.119 CC lib/event/app.o 00:04:44.119 CC lib/event/reactor.o 00:04:44.119 CC lib/event/log_rpc.o 00:04:44.119 CC lib/bdev/bdev.o 00:04:44.119 CC lib/event/app_rpc.o 00:04:44.381 SO libspdk_virtio.so.7.0 00:04:44.381 SYMLINK libspdk_virtio.so 00:04:44.381 CC lib/event/scheduler_static.o 00:04:44.381 CC lib/bdev/bdev_rpc.o 00:04:44.381 LIB libspdk_fsdev.a 00:04:44.381 SO libspdk_fsdev.so.1.0 00:04:44.381 CC lib/bdev/bdev_zone.o 00:04:44.381 SYMLINK libspdk_fsdev.so 00:04:44.381 CC lib/bdev/part.o 00:04:44.381 CC lib/bdev/scsi_nvme.o 00:04:44.642 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:44.642 LIB libspdk_event.a 00:04:44.642 SO libspdk_event.so.14.0 00:04:44.642 LIB libspdk_nvme.a 00:04:44.642 SYMLINK libspdk_event.so 00:04:44.903 SO libspdk_nvme.so.14.0 00:04:45.165 SYMLINK libspdk_nvme.so 00:04:45.165 LIB libspdk_fuse_dispatcher.a 00:04:45.165 SO libspdk_fuse_dispatcher.so.1.0 00:04:45.165 SYMLINK libspdk_fuse_dispatcher.so 00:04:46.555 LIB libspdk_blob.a 00:04:46.555 SO libspdk_blob.so.11.0 00:04:46.815 SYMLINK libspdk_blob.so 00:04:46.815 CC lib/lvol/lvol.o 00:04:46.816 CC lib/blobfs/blobfs.o 00:04:46.816 CC lib/blobfs/tree.o 00:04:47.076 LIB libspdk_bdev.a 00:04:47.076 SO libspdk_bdev.so.16.0 00:04:47.337 SYMLINK libspdk_bdev.so 00:04:47.337 CC lib/ftl/ftl_init.o 00:04:47.337 CC lib/ftl/ftl_layout.o 00:04:47.337 CC lib/ftl/ftl_core.o 00:04:47.337 CC lib/ftl/ftl_debug.o 00:04:47.337 CC lib/scsi/dev.o 00:04:47.337 CC lib/nbd/nbd.o 00:04:47.337 CC lib/nvmf/ctrlr.o 00:04:47.337 CC lib/ublk/ublk.o 00:04:47.600 CC lib/scsi/lun.o 00:04:47.600 CC lib/ublk/ublk_rpc.o 00:04:47.600 CC lib/nvmf/ctrlr_discovery.o 00:04:47.600 CC lib/scsi/port.o 00:04:47.861 CC lib/scsi/scsi.o 00:04:47.861 CC lib/nvmf/ctrlr_bdev.o 00:04:47.861 CC lib/ftl/ftl_io.o 00:04:47.861 LIB libspdk_lvol.a 00:04:47.861 CC lib/scsi/scsi_bdev.o 00:04:47.861 CC lib/nbd/nbd_rpc.o 00:04:47.861 LIB libspdk_blobfs.a 00:04:47.861 SO libspdk_lvol.so.10.0 00:04:47.861 CC lib/scsi/scsi_pr.o 00:04:47.861 SO libspdk_blobfs.so.10.0 00:04:47.861 SYMLINK libspdk_lvol.so 00:04:47.861 CC lib/scsi/scsi_rpc.o 00:04:48.124 SYMLINK libspdk_blobfs.so 00:04:48.124 CC lib/ftl/ftl_sb.o 00:04:48.124 LIB libspdk_nbd.a 00:04:48.124 CC lib/nvmf/subsystem.o 00:04:48.124 SO libspdk_nbd.so.7.0 00:04:48.124 SYMLINK libspdk_nbd.so 00:04:48.124 CC lib/nvmf/nvmf.o 00:04:48.124 CC lib/nvmf/nvmf_rpc.o 00:04:48.124 CC lib/nvmf/transport.o 00:04:48.124 LIB libspdk_ublk.a 00:04:48.124 CC lib/ftl/ftl_l2p.o 00:04:48.124 SO libspdk_ublk.so.3.0 00:04:48.124 SYMLINK libspdk_ublk.so 00:04:48.124 CC lib/scsi/task.o 00:04:48.124 CC lib/nvmf/tcp.o 00:04:48.384 CC 
lib/nvmf/stubs.o 00:04:48.384 CC lib/ftl/ftl_l2p_flat.o 00:04:48.384 CC lib/ftl/ftl_nv_cache.o 00:04:48.384 LIB libspdk_scsi.a 00:04:48.384 SO libspdk_scsi.so.9.0 00:04:48.645 SYMLINK libspdk_scsi.so 00:04:48.645 CC lib/ftl/ftl_band.o 00:04:48.645 CC lib/nvmf/mdns_server.o 00:04:48.645 CC lib/ftl/ftl_band_ops.o 00:04:48.906 CC lib/nvmf/rdma.o 00:04:48.906 CC lib/nvmf/auth.o 00:04:48.906 CC lib/ftl/ftl_writer.o 00:04:48.906 CC lib/ftl/ftl_rq.o 00:04:48.906 CC lib/ftl/ftl_reloc.o 00:04:48.906 CC lib/ftl/ftl_l2p_cache.o 00:04:49.166 CC lib/ftl/ftl_p2l.o 00:04:49.167 CC lib/ftl/ftl_p2l_log.o 00:04:49.167 CC lib/ftl/mngt/ftl_mngt.o 00:04:49.167 CC lib/iscsi/conn.o 00:04:49.427 CC lib/vhost/vhost.o 00:04:49.427 CC lib/vhost/vhost_rpc.o 00:04:49.427 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:49.427 CC lib/vhost/vhost_scsi.o 00:04:49.427 CC lib/vhost/vhost_blk.o 00:04:49.427 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:49.690 CC lib/iscsi/init_grp.o 00:04:49.690 CC lib/iscsi/iscsi.o 00:04:49.690 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:49.690 CC lib/iscsi/param.o 00:04:49.690 CC lib/vhost/rte_vhost_user.o 00:04:49.690 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:49.952 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:49.952 CC lib/iscsi/portal_grp.o 00:04:49.952 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:49.952 CC lib/iscsi/tgt_node.o 00:04:49.952 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:50.221 CC lib/iscsi/iscsi_subsystem.o 00:04:50.221 CC lib/iscsi/iscsi_rpc.o 00:04:50.221 CC lib/iscsi/task.o 00:04:50.221 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:50.221 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:50.221 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:50.221 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:50.221 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:50.221 CC lib/ftl/utils/ftl_conf.o 00:04:50.489 CC lib/ftl/utils/ftl_md.o 00:04:50.489 CC lib/ftl/utils/ftl_mempool.o 00:04:50.489 CC lib/ftl/utils/ftl_bitmap.o 00:04:50.489 LIB libspdk_vhost.a 00:04:50.489 CC lib/ftl/utils/ftl_property.o 00:04:50.489 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:50.489 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:50.489 SO libspdk_vhost.so.8.0 00:04:50.489 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:50.489 SYMLINK libspdk_vhost.so 00:04:50.748 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:50.748 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:50.748 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:50.748 LIB libspdk_iscsi.a 00:04:50.748 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:50.749 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:50.749 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:50.749 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:50.749 SO libspdk_iscsi.so.8.0 00:04:50.749 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:50.749 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:50.749 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:50.749 CC lib/ftl/base/ftl_base_dev.o 00:04:50.749 CC lib/ftl/base/ftl_base_bdev.o 00:04:51.010 SYMLINK libspdk_iscsi.so 00:04:51.010 CC lib/ftl/ftl_trace.o 00:04:51.010 LIB libspdk_ftl.a 00:04:51.010 LIB libspdk_nvmf.a 00:04:51.270 SO libspdk_nvmf.so.19.0 00:04:51.270 SO libspdk_ftl.so.9.0 00:04:51.531 SYMLINK libspdk_nvmf.so 00:04:51.531 SYMLINK libspdk_ftl.so 00:04:51.791 CC module/env_dpdk/env_dpdk_rpc.o 00:04:51.791 CC module/accel/iaa/accel_iaa.o 00:04:51.791 CC module/accel/ioat/accel_ioat.o 00:04:51.791 CC module/sock/posix/posix.o 00:04:51.791 CC module/accel/error/accel_error.o 00:04:51.791 CC module/accel/dsa/accel_dsa.o 00:04:51.791 CC module/fsdev/aio/fsdev_aio.o 00:04:51.791 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:51.791 CC module/keyring/file/keyring.o 00:04:51.791 CC 
module/blob/bdev/blob_bdev.o 00:04:51.792 LIB libspdk_env_dpdk_rpc.a 00:04:51.792 SO libspdk_env_dpdk_rpc.so.6.0 00:04:51.792 CC module/accel/error/accel_error_rpc.o 00:04:51.792 SYMLINK libspdk_env_dpdk_rpc.so 00:04:51.792 CC module/keyring/file/keyring_rpc.o 00:04:51.792 CC module/accel/ioat/accel_ioat_rpc.o 00:04:51.792 CC module/accel/dsa/accel_dsa_rpc.o 00:04:52.052 LIB libspdk_scheduler_dynamic.a 00:04:52.052 SO libspdk_scheduler_dynamic.so.4.0 00:04:52.052 CC module/accel/iaa/accel_iaa_rpc.o 00:04:52.052 LIB libspdk_accel_error.a 00:04:52.052 LIB libspdk_blob_bdev.a 00:04:52.052 LIB libspdk_keyring_file.a 00:04:52.052 SYMLINK libspdk_scheduler_dynamic.so 00:04:52.052 LIB libspdk_accel_ioat.a 00:04:52.052 SO libspdk_accel_error.so.2.0 00:04:52.052 SO libspdk_blob_bdev.so.11.0 00:04:52.052 SO libspdk_keyring_file.so.2.0 00:04:52.052 SO libspdk_accel_ioat.so.6.0 00:04:52.052 LIB libspdk_accel_dsa.a 00:04:52.052 SYMLINK libspdk_blob_bdev.so 00:04:52.052 SYMLINK libspdk_keyring_file.so 00:04:52.052 SO libspdk_accel_dsa.so.5.0 00:04:52.052 SYMLINK libspdk_accel_error.so 00:04:52.052 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:52.052 LIB libspdk_accel_iaa.a 00:04:52.052 SYMLINK libspdk_accel_ioat.so 00:04:52.052 SO libspdk_accel_iaa.so.3.0 00:04:52.052 SYMLINK libspdk_accel_dsa.so 00:04:52.052 CC module/fsdev/aio/linux_aio_mgr.o 00:04:52.052 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:52.052 CC module/scheduler/gscheduler/gscheduler.o 00:04:52.313 SYMLINK libspdk_accel_iaa.so 00:04:52.313 CC module/keyring/linux/keyring.o 00:04:52.313 CC module/keyring/linux/keyring_rpc.o 00:04:52.313 LIB libspdk_scheduler_dpdk_governor.a 00:04:52.313 CC module/bdev/delay/vbdev_delay.o 00:04:52.313 LIB libspdk_scheduler_gscheduler.a 00:04:52.313 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:52.313 CC module/bdev/error/vbdev_error.o 00:04:52.313 SO libspdk_scheduler_gscheduler.so.4.0 00:04:52.313 CC module/blobfs/bdev/blobfs_bdev.o 00:04:52.313 CC module/bdev/error/vbdev_error_rpc.o 00:04:52.313 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:52.313 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:52.313 SYMLINK libspdk_scheduler_gscheduler.so 00:04:52.313 LIB libspdk_keyring_linux.a 00:04:52.313 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:52.313 CC module/bdev/gpt/gpt.o 00:04:52.313 SO libspdk_keyring_linux.so.1.0 00:04:52.574 LIB libspdk_fsdev_aio.a 00:04:52.574 SYMLINK libspdk_keyring_linux.so 00:04:52.574 SO libspdk_fsdev_aio.so.1.0 00:04:52.574 LIB libspdk_sock_posix.a 00:04:52.574 LIB libspdk_bdev_error.a 00:04:52.574 SYMLINK libspdk_fsdev_aio.so 00:04:52.574 CC module/bdev/gpt/vbdev_gpt.o 00:04:52.574 SO libspdk_sock_posix.so.6.0 00:04:52.574 LIB libspdk_blobfs_bdev.a 00:04:52.574 SO libspdk_bdev_error.so.6.0 00:04:52.574 SO libspdk_blobfs_bdev.so.6.0 00:04:52.574 LIB libspdk_bdev_delay.a 00:04:52.574 SYMLINK libspdk_sock_posix.so 00:04:52.574 CC module/bdev/lvol/vbdev_lvol.o 00:04:52.574 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:52.574 SYMLINK libspdk_bdev_error.so 00:04:52.574 CC module/bdev/malloc/bdev_malloc.o 00:04:52.574 SYMLINK libspdk_blobfs_bdev.so 00:04:52.574 SO libspdk_bdev_delay.so.6.0 00:04:52.574 CC module/bdev/null/bdev_null.o 00:04:52.574 CC module/bdev/nvme/bdev_nvme.o 00:04:52.574 SYMLINK libspdk_bdev_delay.so 00:04:52.834 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:52.834 CC module/bdev/passthru/vbdev_passthru.o 00:04:52.834 CC module/bdev/raid/bdev_raid.o 00:04:52.834 CC module/bdev/split/vbdev_split.o 00:04:52.834 LIB libspdk_bdev_gpt.a 00:04:52.834 SO 
libspdk_bdev_gpt.so.6.0 00:04:52.834 CC module/bdev/null/bdev_null_rpc.o 00:04:52.834 SYMLINK libspdk_bdev_gpt.so 00:04:52.834 CC module/bdev/split/vbdev_split_rpc.o 00:04:52.834 CC module/bdev/nvme/nvme_rpc.o 00:04:52.834 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:53.095 CC module/bdev/nvme/bdev_mdns_client.o 00:04:53.095 LIB libspdk_bdev_lvol.a 00:04:53.095 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:53.095 SO libspdk_bdev_lvol.so.6.0 00:04:53.095 LIB libspdk_bdev_null.a 00:04:53.095 LIB libspdk_bdev_split.a 00:04:53.095 LIB libspdk_bdev_passthru.a 00:04:53.095 SO libspdk_bdev_null.so.6.0 00:04:53.095 SO libspdk_bdev_split.so.6.0 00:04:53.095 SO libspdk_bdev_passthru.so.6.0 00:04:53.095 SYMLINK libspdk_bdev_lvol.so 00:04:53.095 CC module/bdev/nvme/vbdev_opal.o 00:04:53.095 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:53.095 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:53.096 SYMLINK libspdk_bdev_split.so 00:04:53.096 SYMLINK libspdk_bdev_passthru.so 00:04:53.096 CC module/bdev/raid/bdev_raid_rpc.o 00:04:53.096 SYMLINK libspdk_bdev_null.so 00:04:53.096 CC module/bdev/raid/bdev_raid_sb.o 00:04:53.096 LIB libspdk_bdev_malloc.a 00:04:53.096 SO libspdk_bdev_malloc.so.6.0 00:04:53.356 SYMLINK libspdk_bdev_malloc.so 00:04:53.356 CC module/bdev/raid/raid0.o 00:04:53.356 CC module/bdev/raid/raid1.o 00:04:53.356 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:53.356 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:53.356 CC module/bdev/raid/concat.o 00:04:53.356 CC module/bdev/xnvme/bdev_xnvme.o 00:04:53.617 CC module/bdev/aio/bdev_aio.o 00:04:53.617 CC module/bdev/ftl/bdev_ftl.o 00:04:53.617 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:53.617 CC module/bdev/aio/bdev_aio_rpc.o 00:04:53.617 CC module/bdev/iscsi/bdev_iscsi.o 00:04:53.617 LIB libspdk_bdev_zone_block.a 00:04:53.617 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:53.617 LIB libspdk_bdev_xnvme.a 00:04:53.617 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:53.617 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:53.617 SO libspdk_bdev_zone_block.so.6.0 00:04:53.617 SO libspdk_bdev_xnvme.so.3.0 00:04:53.617 SYMLINK libspdk_bdev_zone_block.so 00:04:53.617 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:53.617 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:53.879 SYMLINK libspdk_bdev_xnvme.so 00:04:53.879 LIB libspdk_bdev_raid.a 00:04:53.879 LIB libspdk_bdev_aio.a 00:04:53.879 SO libspdk_bdev_raid.so.6.0 00:04:53.879 SO libspdk_bdev_aio.so.6.0 00:04:53.879 LIB libspdk_bdev_ftl.a 00:04:53.879 LIB libspdk_bdev_iscsi.a 00:04:53.879 SO libspdk_bdev_ftl.so.6.0 00:04:53.879 SYMLINK libspdk_bdev_aio.so 00:04:53.879 SO libspdk_bdev_iscsi.so.6.0 00:04:53.879 SYMLINK libspdk_bdev_raid.so 00:04:53.879 SYMLINK libspdk_bdev_ftl.so 00:04:53.879 SYMLINK libspdk_bdev_iscsi.so 00:04:54.140 LIB libspdk_bdev_virtio.a 00:04:54.140 SO libspdk_bdev_virtio.so.6.0 00:04:54.399 SYMLINK libspdk_bdev_virtio.so 00:04:54.662 LIB libspdk_bdev_nvme.a 00:04:54.662 SO libspdk_bdev_nvme.so.7.0 00:04:54.924 SYMLINK libspdk_bdev_nvme.so 00:04:55.185 CC module/event/subsystems/vmd/vmd.o 00:04:55.185 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:55.185 CC module/event/subsystems/sock/sock.o 00:04:55.185 CC module/event/subsystems/fsdev/fsdev.o 00:04:55.185 CC module/event/subsystems/scheduler/scheduler.o 00:04:55.185 CC module/event/subsystems/iobuf/iobuf.o 00:04:55.185 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:55.185 CC module/event/subsystems/keyring/keyring.o 00:04:55.185 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:55.185 LIB libspdk_event_vhost_blk.a 
00:04:55.185 LIB libspdk_event_sock.a 00:04:55.185 LIB libspdk_event_vmd.a 00:04:55.185 LIB libspdk_event_keyring.a 00:04:55.185 LIB libspdk_event_fsdev.a 00:04:55.446 SO libspdk_event_vhost_blk.so.3.0 00:04:55.446 SO libspdk_event_sock.so.5.0 00:04:55.446 LIB libspdk_event_scheduler.a 00:04:55.446 LIB libspdk_event_iobuf.a 00:04:55.446 SO libspdk_event_fsdev.so.1.0 00:04:55.446 SO libspdk_event_keyring.so.1.0 00:04:55.446 SO libspdk_event_vmd.so.6.0 00:04:55.446 SO libspdk_event_scheduler.so.4.0 00:04:55.446 SO libspdk_event_iobuf.so.3.0 00:04:55.446 SYMLINK libspdk_event_sock.so 00:04:55.446 SYMLINK libspdk_event_vhost_blk.so 00:04:55.446 SYMLINK libspdk_event_keyring.so 00:04:55.446 SYMLINK libspdk_event_fsdev.so 00:04:55.446 SYMLINK libspdk_event_scheduler.so 00:04:55.446 SYMLINK libspdk_event_vmd.so 00:04:55.446 SYMLINK libspdk_event_iobuf.so 00:04:55.707 CC module/event/subsystems/accel/accel.o 00:04:55.707 LIB libspdk_event_accel.a 00:04:55.707 SO libspdk_event_accel.so.6.0 00:04:55.707 SYMLINK libspdk_event_accel.so 00:04:55.968 CC module/event/subsystems/bdev/bdev.o 00:04:56.229 LIB libspdk_event_bdev.a 00:04:56.229 SO libspdk_event_bdev.so.6.0 00:04:56.229 SYMLINK libspdk_event_bdev.so 00:04:56.489 CC module/event/subsystems/nbd/nbd.o 00:04:56.489 CC module/event/subsystems/scsi/scsi.o 00:04:56.489 CC module/event/subsystems/ublk/ublk.o 00:04:56.489 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:56.489 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:56.489 LIB libspdk_event_nbd.a 00:04:56.489 LIB libspdk_event_ublk.a 00:04:56.489 SO libspdk_event_ublk.so.3.0 00:04:56.489 SO libspdk_event_nbd.so.6.0 00:04:56.489 LIB libspdk_event_scsi.a 00:04:56.489 SO libspdk_event_scsi.so.6.0 00:04:56.489 SYMLINK libspdk_event_nbd.so 00:04:56.489 SYMLINK libspdk_event_ublk.so 00:04:56.489 LIB libspdk_event_nvmf.a 00:04:56.489 SYMLINK libspdk_event_scsi.so 00:04:56.490 SO libspdk_event_nvmf.so.6.0 00:04:56.751 SYMLINK libspdk_event_nvmf.so 00:04:56.751 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:56.751 CC module/event/subsystems/iscsi/iscsi.o 00:04:57.012 LIB libspdk_event_vhost_scsi.a 00:04:57.012 SO libspdk_event_vhost_scsi.so.3.0 00:04:57.012 LIB libspdk_event_iscsi.a 00:04:57.012 SO libspdk_event_iscsi.so.6.0 00:04:57.012 SYMLINK libspdk_event_vhost_scsi.so 00:04:57.012 SYMLINK libspdk_event_iscsi.so 00:04:57.274 SO libspdk.so.6.0 00:04:57.274 SYMLINK libspdk.so 00:04:57.274 CC test/rpc_client/rpc_client_test.o 00:04:57.274 TEST_HEADER include/spdk/accel.h 00:04:57.274 CXX app/trace/trace.o 00:04:57.274 TEST_HEADER include/spdk/accel_module.h 00:04:57.274 TEST_HEADER include/spdk/assert.h 00:04:57.274 TEST_HEADER include/spdk/barrier.h 00:04:57.274 TEST_HEADER include/spdk/base64.h 00:04:57.274 TEST_HEADER include/spdk/bdev.h 00:04:57.274 TEST_HEADER include/spdk/bdev_module.h 00:04:57.274 TEST_HEADER include/spdk/bdev_zone.h 00:04:57.274 TEST_HEADER include/spdk/bit_array.h 00:04:57.274 TEST_HEADER include/spdk/bit_pool.h 00:04:57.274 TEST_HEADER include/spdk/blob_bdev.h 00:04:57.274 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:57.274 TEST_HEADER include/spdk/blobfs.h 00:04:57.274 TEST_HEADER include/spdk/blob.h 00:04:57.274 TEST_HEADER include/spdk/conf.h 00:04:57.274 TEST_HEADER include/spdk/config.h 00:04:57.274 TEST_HEADER include/spdk/cpuset.h 00:04:57.274 TEST_HEADER include/spdk/crc16.h 00:04:57.274 TEST_HEADER include/spdk/crc32.h 00:04:57.274 TEST_HEADER include/spdk/crc64.h 00:04:57.274 TEST_HEADER include/spdk/dif.h 00:04:57.274 TEST_HEADER include/spdk/dma.h 
00:04:57.274 TEST_HEADER include/spdk/endian.h 00:04:57.274 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:57.274 TEST_HEADER include/spdk/env_dpdk.h 00:04:57.274 TEST_HEADER include/spdk/env.h 00:04:57.274 TEST_HEADER include/spdk/event.h 00:04:57.274 TEST_HEADER include/spdk/fd_group.h 00:04:57.274 TEST_HEADER include/spdk/fd.h 00:04:57.274 TEST_HEADER include/spdk/file.h 00:04:57.274 TEST_HEADER include/spdk/fsdev.h 00:04:57.274 TEST_HEADER include/spdk/fsdev_module.h 00:04:57.274 TEST_HEADER include/spdk/ftl.h 00:04:57.274 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:57.274 CC test/thread/poller_perf/poller_perf.o 00:04:57.274 CC examples/ioat/perf/perf.o 00:04:57.274 TEST_HEADER include/spdk/gpt_spec.h 00:04:57.274 CC examples/util/zipf/zipf.o 00:04:57.274 TEST_HEADER include/spdk/hexlify.h 00:04:57.274 TEST_HEADER include/spdk/histogram_data.h 00:04:57.274 TEST_HEADER include/spdk/idxd.h 00:04:57.274 TEST_HEADER include/spdk/idxd_spec.h 00:04:57.274 TEST_HEADER include/spdk/init.h 00:04:57.274 TEST_HEADER include/spdk/ioat.h 00:04:57.274 TEST_HEADER include/spdk/ioat_spec.h 00:04:57.274 TEST_HEADER include/spdk/iscsi_spec.h 00:04:57.274 TEST_HEADER include/spdk/json.h 00:04:57.274 TEST_HEADER include/spdk/jsonrpc.h 00:04:57.274 TEST_HEADER include/spdk/keyring.h 00:04:57.274 TEST_HEADER include/spdk/keyring_module.h 00:04:57.274 TEST_HEADER include/spdk/likely.h 00:04:57.274 TEST_HEADER include/spdk/log.h 00:04:57.274 TEST_HEADER include/spdk/lvol.h 00:04:57.274 TEST_HEADER include/spdk/md5.h 00:04:57.274 TEST_HEADER include/spdk/memory.h 00:04:57.274 TEST_HEADER include/spdk/mmio.h 00:04:57.537 TEST_HEADER include/spdk/nbd.h 00:04:57.537 TEST_HEADER include/spdk/net.h 00:04:57.537 TEST_HEADER include/spdk/notify.h 00:04:57.537 TEST_HEADER include/spdk/nvme.h 00:04:57.537 CC test/dma/test_dma/test_dma.o 00:04:57.537 TEST_HEADER include/spdk/nvme_intel.h 00:04:57.537 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:57.537 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:57.537 TEST_HEADER include/spdk/nvme_spec.h 00:04:57.537 TEST_HEADER include/spdk/nvme_zns.h 00:04:57.537 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:57.537 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:57.537 TEST_HEADER include/spdk/nvmf.h 00:04:57.537 TEST_HEADER include/spdk/nvmf_spec.h 00:04:57.537 TEST_HEADER include/spdk/nvmf_transport.h 00:04:57.537 TEST_HEADER include/spdk/opal.h 00:04:57.537 TEST_HEADER include/spdk/opal_spec.h 00:04:57.537 TEST_HEADER include/spdk/pci_ids.h 00:04:57.537 CC test/app/bdev_svc/bdev_svc.o 00:04:57.537 TEST_HEADER include/spdk/pipe.h 00:04:57.537 TEST_HEADER include/spdk/queue.h 00:04:57.537 TEST_HEADER include/spdk/reduce.h 00:04:57.537 TEST_HEADER include/spdk/rpc.h 00:04:57.537 TEST_HEADER include/spdk/scheduler.h 00:04:57.537 CC test/env/mem_callbacks/mem_callbacks.o 00:04:57.537 TEST_HEADER include/spdk/scsi.h 00:04:57.537 TEST_HEADER include/spdk/scsi_spec.h 00:04:57.537 TEST_HEADER include/spdk/sock.h 00:04:57.537 TEST_HEADER include/spdk/stdinc.h 00:04:57.537 LINK rpc_client_test 00:04:57.537 TEST_HEADER include/spdk/string.h 00:04:57.537 TEST_HEADER include/spdk/thread.h 00:04:57.537 TEST_HEADER include/spdk/trace.h 00:04:57.537 TEST_HEADER include/spdk/trace_parser.h 00:04:57.537 TEST_HEADER include/spdk/tree.h 00:04:57.537 TEST_HEADER include/spdk/ublk.h 00:04:57.537 TEST_HEADER include/spdk/util.h 00:04:57.537 TEST_HEADER include/spdk/uuid.h 00:04:57.537 TEST_HEADER include/spdk/version.h 00:04:57.537 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:57.537 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:04:57.537 TEST_HEADER include/spdk/vhost.h 00:04:57.537 TEST_HEADER include/spdk/vmd.h 00:04:57.537 TEST_HEADER include/spdk/xor.h 00:04:57.537 TEST_HEADER include/spdk/zipf.h 00:04:57.537 CXX test/cpp_headers/accel.o 00:04:57.537 LINK poller_perf 00:04:57.537 LINK zipf 00:04:57.537 LINK ioat_perf 00:04:57.537 LINK interrupt_tgt 00:04:57.537 CXX test/cpp_headers/accel_module.o 00:04:57.537 LINK bdev_svc 00:04:57.537 CXX test/cpp_headers/assert.o 00:04:57.537 CXX test/cpp_headers/barrier.o 00:04:57.798 LINK spdk_trace 00:04:57.798 CC examples/ioat/verify/verify.o 00:04:57.798 CXX test/cpp_headers/base64.o 00:04:57.798 CC test/env/vtophys/vtophys.o 00:04:57.798 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:57.798 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:57.798 LINK mem_callbacks 00:04:57.798 CC app/trace_record/trace_record.o 00:04:57.798 CC examples/thread/thread/thread_ex.o 00:04:57.798 CXX test/cpp_headers/bdev.o 00:04:57.798 LINK test_dma 00:04:57.798 LINK verify 00:04:58.058 LINK vtophys 00:04:58.058 CC examples/sock/hello_world/hello_sock.o 00:04:58.058 CXX test/cpp_headers/bdev_module.o 00:04:58.058 CXX test/cpp_headers/bdev_zone.o 00:04:58.058 LINK spdk_trace_record 00:04:58.058 LINK thread 00:04:58.058 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:58.058 CC test/env/memory/memory_ut.o 00:04:58.058 LINK hello_sock 00:04:58.058 LINK nvme_fuzz 00:04:58.319 CXX test/cpp_headers/bit_array.o 00:04:58.319 CC test/env/pci/pci_ut.o 00:04:58.319 CC examples/vmd/lsvmd/lsvmd.o 00:04:58.319 CXX test/cpp_headers/bit_pool.o 00:04:58.319 LINK env_dpdk_post_init 00:04:58.319 CC app/nvmf_tgt/nvmf_main.o 00:04:58.319 LINK lsvmd 00:04:58.319 CXX test/cpp_headers/blob_bdev.o 00:04:58.319 CC app/iscsi_tgt/iscsi_tgt.o 00:04:58.319 CC examples/idxd/perf/perf.o 00:04:58.581 CC app/spdk_tgt/spdk_tgt.o 00:04:58.581 CC examples/vmd/led/led.o 00:04:58.581 LINK nvmf_tgt 00:04:58.581 CXX test/cpp_headers/blobfs_bdev.o 00:04:58.581 LINK pci_ut 00:04:58.581 CC app/spdk_lspci/spdk_lspci.o 00:04:58.581 LINK led 00:04:58.581 LINK iscsi_tgt 00:04:58.581 CXX test/cpp_headers/blobfs.o 00:04:58.581 LINK spdk_tgt 00:04:58.581 LINK spdk_lspci 00:04:58.842 LINK idxd_perf 00:04:58.842 CXX test/cpp_headers/blob.o 00:04:58.842 CC app/spdk_nvme_perf/perf.o 00:04:58.842 CC examples/nvme/hello_world/hello_world.o 00:04:58.842 CC app/spdk_nvme_identify/identify.o 00:04:58.842 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:58.842 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:58.842 CXX test/cpp_headers/conf.o 00:04:59.103 LINK hello_world 00:04:59.103 CC examples/accel/perf/accel_perf.o 00:04:59.103 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:59.103 CXX test/cpp_headers/config.o 00:04:59.103 CXX test/cpp_headers/cpuset.o 00:04:59.103 CC test/event/event_perf/event_perf.o 00:04:59.103 LINK memory_ut 00:04:59.103 CC examples/nvme/reconnect/reconnect.o 00:04:59.364 CXX test/cpp_headers/crc16.o 00:04:59.364 LINK hello_fsdev 00:04:59.364 LINK event_perf 00:04:59.364 LINK vhost_fuzz 00:04:59.364 CXX test/cpp_headers/crc32.o 00:04:59.364 CC test/event/reactor/reactor.o 00:04:59.364 LINK iscsi_fuzz 00:04:59.364 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:59.364 LINK spdk_nvme_identify 00:04:59.364 CC test/event/reactor_perf/reactor_perf.o 00:04:59.626 LINK reconnect 00:04:59.626 LINK accel_perf 00:04:59.626 LINK spdk_nvme_perf 00:04:59.626 CXX test/cpp_headers/crc64.o 00:04:59.626 LINK reactor 00:04:59.626 CC examples/blob/hello_world/hello_blob.o 00:04:59.626 LINK 
reactor_perf 00:04:59.626 CC test/event/app_repeat/app_repeat.o 00:04:59.626 CC test/app/histogram_perf/histogram_perf.o 00:04:59.626 CXX test/cpp_headers/dif.o 00:04:59.626 CC app/spdk_nvme_discover/discovery_aer.o 00:04:59.626 CC app/spdk_top/spdk_top.o 00:04:59.626 CC test/app/jsoncat/jsoncat.o 00:04:59.887 LINK histogram_perf 00:04:59.887 LINK hello_blob 00:04:59.887 LINK app_repeat 00:04:59.887 CXX test/cpp_headers/dma.o 00:04:59.887 CC examples/blob/cli/blobcli.o 00:04:59.887 LINK nvme_manage 00:04:59.887 CC examples/bdev/hello_world/hello_bdev.o 00:04:59.887 LINK spdk_nvme_discover 00:04:59.887 LINK jsoncat 00:04:59.887 CXX test/cpp_headers/endian.o 00:04:59.887 CC examples/bdev/bdevperf/bdevperf.o 00:04:59.887 CXX test/cpp_headers/env_dpdk.o 00:05:00.148 CC app/vhost/vhost.o 00:05:00.148 CC test/app/stub/stub.o 00:05:00.148 CC test/event/scheduler/scheduler.o 00:05:00.148 CC examples/nvme/arbitration/arbitration.o 00:05:00.148 LINK hello_bdev 00:05:00.148 CXX test/cpp_headers/env.o 00:05:00.148 LINK vhost 00:05:00.148 LINK stub 00:05:00.148 CC app/spdk_dd/spdk_dd.o 00:05:00.148 LINK blobcli 00:05:00.148 LINK scheduler 00:05:00.148 CXX test/cpp_headers/event.o 00:05:00.148 CXX test/cpp_headers/fd_group.o 00:05:00.409 LINK arbitration 00:05:00.409 CXX test/cpp_headers/fd.o 00:05:00.409 CXX test/cpp_headers/file.o 00:05:00.409 CXX test/cpp_headers/fsdev.o 00:05:00.409 CC app/fio/nvme/fio_plugin.o 00:05:00.409 CC examples/nvme/hotplug/hotplug.o 00:05:00.409 CC test/nvme/aer/aer.o 00:05:00.409 CC app/fio/bdev/fio_plugin.o 00:05:00.670 CXX test/cpp_headers/fsdev_module.o 00:05:00.670 LINK spdk_dd 00:05:00.670 CC test/nvme/reset/reset.o 00:05:00.670 LINK bdevperf 00:05:00.670 LINK spdk_top 00:05:00.670 CC test/nvme/sgl/sgl.o 00:05:00.670 LINK hotplug 00:05:00.670 CXX test/cpp_headers/ftl.o 00:05:00.670 CXX test/cpp_headers/fuse_dispatcher.o 00:05:00.670 LINK aer 00:05:00.670 CXX test/cpp_headers/gpt_spec.o 00:05:00.670 CXX test/cpp_headers/hexlify.o 00:05:00.969 LINK reset 00:05:00.969 CXX test/cpp_headers/histogram_data.o 00:05:00.969 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:00.969 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:00.969 CC examples/nvme/abort/abort.o 00:05:00.969 LINK sgl 00:05:00.969 CC test/nvme/e2edp/nvme_dp.o 00:05:00.969 CXX test/cpp_headers/idxd.o 00:05:00.969 LINK spdk_nvme 00:05:00.969 LINK spdk_bdev 00:05:00.969 CC test/nvme/overhead/overhead.o 00:05:00.969 LINK pmr_persistence 00:05:00.969 CXX test/cpp_headers/idxd_spec.o 00:05:00.969 CC test/nvme/err_injection/err_injection.o 00:05:00.969 CXX test/cpp_headers/init.o 00:05:00.969 LINK cmb_copy 00:05:00.969 CXX test/cpp_headers/ioat.o 00:05:01.240 CC test/nvme/startup/startup.o 00:05:01.240 CXX test/cpp_headers/ioat_spec.o 00:05:01.240 LINK nvme_dp 00:05:01.240 LINK err_injection 00:05:01.240 CXX test/cpp_headers/iscsi_spec.o 00:05:01.240 CC test/nvme/reserve/reserve.o 00:05:01.240 LINK abort 00:05:01.240 CC test/nvme/simple_copy/simple_copy.o 00:05:01.240 CC test/nvme/connect_stress/connect_stress.o 00:05:01.240 CXX test/cpp_headers/json.o 00:05:01.240 CC test/nvme/boot_partition/boot_partition.o 00:05:01.240 LINK startup 00:05:01.240 LINK overhead 00:05:01.240 CXX test/cpp_headers/jsonrpc.o 00:05:01.240 LINK connect_stress 00:05:01.240 CXX test/cpp_headers/keyring.o 00:05:01.502 CXX test/cpp_headers/keyring_module.o 00:05:01.502 LINK reserve 00:05:01.502 LINK boot_partition 00:05:01.502 CC test/nvme/compliance/nvme_compliance.o 00:05:01.502 LINK simple_copy 00:05:01.502 CXX test/cpp_headers/likely.o 
00:05:01.502 CC test/nvme/fused_ordering/fused_ordering.o 00:05:01.502 CC examples/nvmf/nvmf/nvmf.o 00:05:01.502 CXX test/cpp_headers/log.o 00:05:01.502 CXX test/cpp_headers/lvol.o 00:05:01.502 CC test/nvme/fdp/fdp.o 00:05:01.502 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:01.502 CXX test/cpp_headers/md5.o 00:05:01.502 CC test/nvme/cuse/cuse.o 00:05:01.762 LINK fused_ordering 00:05:01.762 LINK nvme_compliance 00:05:01.762 LINK nvmf 00:05:01.762 CC test/accel/dif/dif.o 00:05:01.762 LINK doorbell_aers 00:05:01.762 CC test/blobfs/mkfs/mkfs.o 00:05:01.762 CXX test/cpp_headers/memory.o 00:05:01.762 CXX test/cpp_headers/mmio.o 00:05:01.762 CXX test/cpp_headers/nbd.o 00:05:01.762 LINK fdp 00:05:01.762 CC test/lvol/esnap/esnap.o 00:05:01.762 CXX test/cpp_headers/net.o 00:05:01.762 CXX test/cpp_headers/notify.o 00:05:01.762 CXX test/cpp_headers/nvme.o 00:05:02.022 CXX test/cpp_headers/nvme_intel.o 00:05:02.022 CXX test/cpp_headers/nvme_ocssd.o 00:05:02.022 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:02.023 CXX test/cpp_headers/nvme_spec.o 00:05:02.023 LINK mkfs 00:05:02.023 CXX test/cpp_headers/nvme_zns.o 00:05:02.023 CXX test/cpp_headers/nvmf_cmd.o 00:05:02.023 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:02.023 CXX test/cpp_headers/nvmf.o 00:05:02.023 CXX test/cpp_headers/nvmf_spec.o 00:05:02.023 CXX test/cpp_headers/nvmf_transport.o 00:05:02.023 CXX test/cpp_headers/opal.o 00:05:02.023 CXX test/cpp_headers/opal_spec.o 00:05:02.284 CXX test/cpp_headers/pci_ids.o 00:05:02.284 CXX test/cpp_headers/pipe.o 00:05:02.284 CXX test/cpp_headers/queue.o 00:05:02.284 CXX test/cpp_headers/reduce.o 00:05:02.284 CXX test/cpp_headers/rpc.o 00:05:02.284 CXX test/cpp_headers/scheduler.o 00:05:02.284 CXX test/cpp_headers/scsi.o 00:05:02.284 CXX test/cpp_headers/scsi_spec.o 00:05:02.284 LINK dif 00:05:02.284 CXX test/cpp_headers/sock.o 00:05:02.284 CXX test/cpp_headers/stdinc.o 00:05:02.284 CXX test/cpp_headers/string.o 00:05:02.284 CXX test/cpp_headers/thread.o 00:05:02.284 CXX test/cpp_headers/trace.o 00:05:02.284 CXX test/cpp_headers/trace_parser.o 00:05:02.284 CXX test/cpp_headers/tree.o 00:05:02.545 CXX test/cpp_headers/ublk.o 00:05:02.545 CXX test/cpp_headers/util.o 00:05:02.545 CXX test/cpp_headers/uuid.o 00:05:02.545 CXX test/cpp_headers/version.o 00:05:02.545 CXX test/cpp_headers/vfio_user_pci.o 00:05:02.545 CXX test/cpp_headers/vfio_user_spec.o 00:05:02.545 CXX test/cpp_headers/vhost.o 00:05:02.545 CXX test/cpp_headers/vmd.o 00:05:02.545 LINK cuse 00:05:02.545 CXX test/cpp_headers/xor.o 00:05:02.545 CXX test/cpp_headers/zipf.o 00:05:02.545 CC test/bdev/bdevio/bdevio.o 00:05:02.806 LINK bdevio 00:05:07.008 LINK esnap 00:05:07.008 00:05:07.008 real 1m5.310s 00:05:07.008 user 5m20.424s 00:05:07.008 sys 0m52.909s 00:05:07.008 21:46:51 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:07.008 ************************************ 00:05:07.008 END TEST make 00:05:07.008 ************************************ 00:05:07.008 21:46:51 make -- common/autotest_common.sh@10 -- $ set +x 00:05:07.008 21:46:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:07.008 21:46:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:07.008 21:46:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:07.008 21:46:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:07.008 21:46:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:07.008 21:46:51 -- pm/common@44 -- $ pid=5787 00:05:07.008 21:46:51 -- pm/common@50 -- $ kill 
-TERM 5787 00:05:07.008 21:46:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:07.008 21:46:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:07.008 21:46:51 -- pm/common@44 -- $ pid=5788 00:05:07.008 21:46:51 -- pm/common@50 -- $ kill -TERM 5788 00:05:07.008 21:46:51 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:07.008 21:46:51 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:07.008 21:46:51 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:07.008 21:46:51 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:07.008 21:46:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.008 21:46:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.008 21:46:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.008 21:46:51 -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.009 21:46:51 -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.009 21:46:51 -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.009 21:46:51 -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.009 21:46:51 -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.009 21:46:51 -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.009 21:46:51 -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.009 21:46:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.009 21:46:51 -- scripts/common.sh@344 -- # case "$op" in 00:05:07.009 21:46:51 -- scripts/common.sh@345 -- # : 1 00:05:07.009 21:46:51 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.009 21:46:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.009 21:46:51 -- scripts/common.sh@365 -- # decimal 1 00:05:07.009 21:46:51 -- scripts/common.sh@353 -- # local d=1 00:05:07.009 21:46:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.009 21:46:51 -- scripts/common.sh@355 -- # echo 1 00:05:07.009 21:46:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.009 21:46:51 -- scripts/common.sh@366 -- # decimal 2 00:05:07.009 21:46:51 -- scripts/common.sh@353 -- # local d=2 00:05:07.009 21:46:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.009 21:46:51 -- scripts/common.sh@355 -- # echo 2 00:05:07.009 21:46:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.009 21:46:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.009 21:46:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.009 21:46:51 -- scripts/common.sh@368 -- # return 0 00:05:07.009 21:46:51 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.009 21:46:51 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:07.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.009 --rc genhtml_branch_coverage=1 00:05:07.009 --rc genhtml_function_coverage=1 00:05:07.009 --rc genhtml_legend=1 00:05:07.009 --rc geninfo_all_blocks=1 00:05:07.009 --rc geninfo_unexecuted_blocks=1 00:05:07.009 00:05:07.009 ' 00:05:07.009 21:46:51 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:07.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.009 --rc genhtml_branch_coverage=1 00:05:07.009 --rc genhtml_function_coverage=1 00:05:07.009 --rc genhtml_legend=1 00:05:07.009 --rc geninfo_all_blocks=1 00:05:07.009 --rc geninfo_unexecuted_blocks=1 00:05:07.009 00:05:07.009 ' 00:05:07.009 21:46:51 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:07.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.009 --rc 
genhtml_branch_coverage=1 00:05:07.009 --rc genhtml_function_coverage=1 00:05:07.009 --rc genhtml_legend=1 00:05:07.009 --rc geninfo_all_blocks=1 00:05:07.009 --rc geninfo_unexecuted_blocks=1 00:05:07.009 00:05:07.009 ' 00:05:07.009 21:46:51 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:07.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.009 --rc genhtml_branch_coverage=1 00:05:07.009 --rc genhtml_function_coverage=1 00:05:07.009 --rc genhtml_legend=1 00:05:07.009 --rc geninfo_all_blocks=1 00:05:07.009 --rc geninfo_unexecuted_blocks=1 00:05:07.009 00:05:07.009 ' 00:05:07.009 21:46:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:07.009 21:46:51 -- nvmf/common.sh@7 -- # uname -s 00:05:07.009 21:46:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.009 21:46:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.009 21:46:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.009 21:46:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.009 21:46:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.009 21:46:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.009 21:46:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.009 21:46:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.009 21:46:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.009 21:46:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.009 21:46:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5187cb00-da92-4c3f-8bf1-79b20a57e81e 00:05:07.009 21:46:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5187cb00-da92-4c3f-8bf1-79b20a57e81e 00:05:07.009 21:46:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.009 21:46:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.009 21:46:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.009 21:46:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.009 21:46:51 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.009 21:46:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.009 21:46:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.009 21:46:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.009 21:46:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.009 21:46:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.009 21:46:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.009 21:46:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.009 21:46:51 -- paths/export.sh@5 -- # export PATH 00:05:07.009 21:46:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.009 21:46:51 -- nvmf/common.sh@51 -- # : 0 00:05:07.009 21:46:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.009 21:46:51 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.009 21:46:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.009 21:46:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.009 21:46:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.009 21:46:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.009 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.009 21:46:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.009 21:46:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.009 21:46:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.009 21:46:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:07.009 21:46:51 -- spdk/autotest.sh@32 -- # uname -s 00:05:07.009 21:46:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:07.009 21:46:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:07.009 21:46:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:07.009 21:46:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:07.009 21:46:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:07.009 21:46:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:07.009 21:46:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:07.009 21:46:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:07.009 21:46:51 -- spdk/autotest.sh@48 -- # udevadm_pid=67915 00:05:07.009 21:46:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:07.009 21:46:51 -- pm/common@17 -- # local monitor 00:05:07.009 21:46:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:07.009 21:46:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:07.009 21:46:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:07.009 21:46:51 -- pm/common@25 -- # sleep 1 00:05:07.009 21:46:51 -- pm/common@21 -- # date +%s 00:05:07.009 21:46:51 -- pm/common@21 -- # date +%s 00:05:07.009 21:46:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727732811 00:05:07.009 21:46:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727732811 00:05:07.009 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727732811_collect-cpu-load.pm.log 00:05:07.009 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727732811_collect-vmstat.pm.log 00:05:07.951 21:46:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:07.951 21:46:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:07.951 21:46:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:07.951 21:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:07.951 21:46:52 -- spdk/autotest.sh@59 -- # create_test_list 
00:05:07.951 21:46:52 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:07.951 21:46:52 -- common/autotest_common.sh@10 -- # set +x 00:05:07.951 21:46:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:07.951 21:46:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:07.951 21:46:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:07.951 21:46:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:07.951 21:46:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:07.951 21:46:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:07.951 21:46:52 -- common/autotest_common.sh@1455 -- # uname 00:05:07.951 21:46:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:07.951 21:46:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:07.951 21:46:52 -- common/autotest_common.sh@1475 -- # uname 00:05:07.951 21:46:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:07.951 21:46:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:07.951 21:46:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:07.951 lcov: LCOV version 1.15 00:05:07.951 21:46:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:22.864 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:22.864 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:37.773 21:47:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:37.773 21:47:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.773 21:47:22 -- common/autotest_common.sh@10 -- # set +x 00:05:37.773 21:47:22 -- spdk/autotest.sh@78 -- # rm -f 00:05:37.773 21:47:22 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.773 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.340 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:38.340 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:38.340 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:38.340 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:38.340 21:47:23 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:38.340 21:47:23 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:38.340 21:47:23 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:38.340 21:47:23 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:38.340 21:47:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.340 21:47:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:38.340 21:47:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:38.340 21:47:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:38.340 21:47:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.340 21:47:23 -- 
common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.340 21:47:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:38.340 21:47:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:38.340 21:47:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:38.340 21:47:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.340 21:47:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.340 21:47:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:38.340 21:47:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:38.340 21:47:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:38.340 21:47:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.340 21:47:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.340 21:47:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:38.340 21:47:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:38.340 21:47:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:38.340 21:47:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.340 21:47:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.341 21:47:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:05:38.341 21:47:23 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:05:38.341 21:47:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:38.341 21:47:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.341 21:47:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.341 21:47:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:05:38.341 21:47:23 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:05:38.341 21:47:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:38.341 21:47:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.341 21:47:23 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:38.341 21:47:23 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:05:38.341 21:47:23 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:05:38.341 21:47:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:38.341 21:47:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:38.341 21:47:23 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:38.341 21:47:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.341 21:47:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.341 21:47:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:38.341 21:47:23 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:38.341 21:47:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:38.341 No valid GPT data, bailing 00:05:38.341 21:47:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:38.341 21:47:23 -- scripts/common.sh@394 -- # pt= 00:05:38.341 21:47:23 -- scripts/common.sh@395 -- # return 1 00:05:38.599 21:47:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:38.599 1+0 records in 00:05:38.599 1+0 records out 00:05:38.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313892 s, 33.4 MB/s 
00:05:38.599 21:47:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.599 21:47:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.599 21:47:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:38.599 21:47:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:38.599 21:47:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:38.599 No valid GPT data, bailing 00:05:38.599 21:47:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:38.599 21:47:23 -- scripts/common.sh@394 -- # pt= 00:05:38.599 21:47:23 -- scripts/common.sh@395 -- # return 1 00:05:38.599 21:47:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:38.599 1+0 records in 00:05:38.599 1+0 records out 00:05:38.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436704 s, 240 MB/s 00:05:38.599 21:47:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.599 21:47:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.599 21:47:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:38.599 21:47:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:38.599 21:47:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:38.599 No valid GPT data, bailing 00:05:38.599 21:47:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:38.599 21:47:23 -- scripts/common.sh@394 -- # pt= 00:05:38.599 21:47:23 -- scripts/common.sh@395 -- # return 1 00:05:38.599 21:47:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:38.599 1+0 records in 00:05:38.599 1+0 records out 00:05:38.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00493275 s, 213 MB/s 00:05:38.599 21:47:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.599 21:47:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.599 21:47:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:38.599 21:47:23 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:38.599 21:47:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:38.599 No valid GPT data, bailing 00:05:38.599 21:47:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:38.599 21:47:23 -- scripts/common.sh@394 -- # pt= 00:05:38.599 21:47:23 -- scripts/common.sh@395 -- # return 1 00:05:38.599 21:47:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:38.857 1+0 records in 00:05:38.857 1+0 records out 00:05:38.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00549993 s, 191 MB/s 00:05:38.857 21:47:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.857 21:47:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.857 21:47:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:38.857 21:47:23 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:38.857 21:47:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:38.857 No valid GPT data, bailing 00:05:38.857 21:47:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:38.857 21:47:23 -- scripts/common.sh@394 -- # pt= 00:05:38.857 21:47:23 -- scripts/common.sh@395 -- # return 1 00:05:38.857 21:47:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:38.858 1+0 records in 00:05:38.858 1+0 records out 00:05:38.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00533222 s, 197 
MB/s 00:05:38.858 21:47:23 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:38.858 21:47:23 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:38.858 21:47:23 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:38.858 21:47:23 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:38.858 21:47:23 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:38.858 No valid GPT data, bailing 00:05:38.858 21:47:23 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:38.858 21:47:23 -- scripts/common.sh@394 -- # pt= 00:05:38.858 21:47:23 -- scripts/common.sh@395 -- # return 1 00:05:38.858 21:47:23 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:38.858 1+0 records in 00:05:38.858 1+0 records out 00:05:38.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00601105 s, 174 MB/s 00:05:38.858 21:47:23 -- spdk/autotest.sh@105 -- # sync 00:05:39.115 21:47:23 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:39.115 21:47:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:39.116 21:47:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:40.489 21:47:25 -- spdk/autotest.sh@111 -- # uname -s 00:05:40.489 21:47:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:40.489 21:47:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:40.489 21:47:25 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:41.056 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:41.313 Hugepages 00:05:41.313 node hugesize free / total 00:05:41.313 node0 1048576kB 0 / 0 00:05:41.313 node0 2048kB 0 / 0 00:05:41.313 00:05:41.313 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:41.571 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:41.571 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:41.571 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:41.571 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:41.571 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:41.829 21:47:26 -- spdk/autotest.sh@117 -- # uname -s 00:05:41.830 21:47:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:41.830 21:47:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:41.830 21:47:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:42.087 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.654 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.654 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.654 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.654 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:42.654 21:47:27 -- common/autotest_common.sh@1515 -- # sleep 1 00:05:44.027 21:47:28 -- common/autotest_common.sh@1516 -- # bdfs=() 00:05:44.027 21:47:28 -- common/autotest_common.sh@1516 -- # local bdfs 00:05:44.027 21:47:28 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:05:44.027 21:47:28 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:05:44.027 21:47:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:44.027 21:47:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:44.027 21:47:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 
00:05:44.027 21:47:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:44.027 21:47:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:44.027 21:47:28 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:44.027 21:47:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:44.027 21:47:28 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:44.027 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:44.285 Waiting for block devices as requested 00:05:44.285 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:44.285 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:44.543 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:44.543 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.843 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:49.843 21:47:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.843 21:47:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.843 21:47:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:49.843 21:47:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:05:49.843 21:47:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:05:49.843 21:47:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:49.843 21:47:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.843 21:47:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.843 21:47:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1541 -- # continue 00:05:49.843 21:47:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.843 21:47:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 
00:05:49.843 21:47:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.843 21:47:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:49.843 21:47:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.843 21:47:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.843 21:47:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.843 21:47:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.843 21:47:34 -- common/autotest_common.sh@1541 -- # continue 00:05:49.843 21:47:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.843 21:47:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:05:49.843 21:47:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:49.843 21:47:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:49.844 21:47:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.844 21:47:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.844 21:47:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1541 -- # continue 00:05:49.844 21:47:34 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:05:49.844 21:47:34 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:49.844 21:47:34 -- 
common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:49.844 21:47:34 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:05:49.844 21:47:34 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:49.844 21:47:34 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:49.844 21:47:34 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:05:49.844 21:47:34 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:05:49.844 21:47:34 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # grep oacs 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:05:49.844 21:47:34 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:05:49.844 21:47:34 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:05:49.844 21:47:34 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:05:49.844 21:47:34 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:05:49.844 21:47:34 -- common/autotest_common.sh@1541 -- # continue 00:05:49.844 21:47:34 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:49.844 21:47:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.844 21:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:49.844 21:47:34 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:49.844 21:47:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.844 21:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:49.844 21:47:34 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:50.418 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:50.679 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.679 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.679 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.941 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:50.941 21:47:35 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:50.941 21:47:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:50.941 21:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:50.941 21:47:35 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:50.941 21:47:35 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:50.941 21:47:35 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:50.941 21:47:35 -- common/autotest_common.sh@1561 -- # bdfs=() 00:05:50.941 21:47:35 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:05:50.941 21:47:35 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:05:50.941 21:47:35 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:05:50.941 21:47:35 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:05:50.941 21:47:35 -- common/autotest_common.sh@1496 -- # bdfs=() 00:05:50.941 
21:47:35 -- common/autotest_common.sh@1496 -- # local bdfs 00:05:50.941 21:47:35 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:50.941 21:47:35 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:50.941 21:47:35 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:05:50.941 21:47:35 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:05:50.941 21:47:35 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:50.941 21:47:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:50.941 21:47:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.941 21:47:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:50.941 21:47:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.941 21:47:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:50.941 21:47:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.941 21:47:35 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:50.941 21:47:35 -- common/autotest_common.sh@1564 -- # device=0x0010 00:05:50.941 21:47:35 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:50.941 21:47:35 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:05:50.941 21:47:35 -- common/autotest_common.sh@1570 -- # return 0 00:05:50.941 21:47:35 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:50.941 21:47:35 -- common/autotest_common.sh@1578 -- # return 0 00:05:50.941 21:47:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:50.941 21:47:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:50.941 21:47:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.941 21:47:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:50.941 21:47:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:50.941 21:47:35 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:50.941 21:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:50.941 21:47:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:50.941 21:47:35 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:50.941 21:47:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.941 21:47:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.941 21:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:51.203 ************************************ 00:05:51.203 START TEST env 00:05:51.203 ************************************ 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:51.203 * Looking for test storage... 
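opal_revert_cleanup only applies to controllers whose PCI device ID is 0x0a54 (an Intel datacenter NVMe part); the QEMU-emulated controllers in this VM all report 0x0010, so the candidate list stays empty and the routine returns immediately. The filter boils down to one sysfs read per BDF, sketched here with the BDFs from this run:

    match=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && match+=("$bdf")
    done
    echo "opal revert candidates: ${match[*]:-none}"    # -> none on this VM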
00:05:51.203 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.203 21:47:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.203 21:47:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.203 21:47:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.203 21:47:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.203 21:47:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.203 21:47:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.203 21:47:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.203 21:47:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.203 21:47:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.203 21:47:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.203 21:47:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.203 21:47:35 env -- scripts/common.sh@344 -- # case "$op" in 00:05:51.203 21:47:35 env -- scripts/common.sh@345 -- # : 1 00:05:51.203 21:47:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.203 21:47:35 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.203 21:47:35 env -- scripts/common.sh@365 -- # decimal 1 00:05:51.203 21:47:35 env -- scripts/common.sh@353 -- # local d=1 00:05:51.203 21:47:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.203 21:47:35 env -- scripts/common.sh@355 -- # echo 1 00:05:51.203 21:47:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.203 21:47:35 env -- scripts/common.sh@366 -- # decimal 2 00:05:51.203 21:47:35 env -- scripts/common.sh@353 -- # local d=2 00:05:51.203 21:47:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.203 21:47:35 env -- scripts/common.sh@355 -- # echo 2 00:05:51.203 21:47:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.203 21:47:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.203 21:47:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.203 21:47:35 env -- scripts/common.sh@368 -- # return 0 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.203 --rc genhtml_branch_coverage=1 00:05:51.203 --rc genhtml_function_coverage=1 00:05:51.203 --rc genhtml_legend=1 00:05:51.203 --rc geninfo_all_blocks=1 00:05:51.203 --rc geninfo_unexecuted_blocks=1 00:05:51.203 00:05:51.203 ' 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.203 --rc genhtml_branch_coverage=1 00:05:51.203 --rc genhtml_function_coverage=1 00:05:51.203 --rc genhtml_legend=1 00:05:51.203 --rc geninfo_all_blocks=1 00:05:51.203 --rc geninfo_unexecuted_blocks=1 00:05:51.203 00:05:51.203 ' 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.203 --rc genhtml_branch_coverage=1 00:05:51.203 --rc genhtml_function_coverage=1 00:05:51.203 --rc 
genhtml_legend=1 00:05:51.203 --rc geninfo_all_blocks=1 00:05:51.203 --rc geninfo_unexecuted_blocks=1 00:05:51.203 00:05:51.203 ' 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.203 --rc genhtml_branch_coverage=1 00:05:51.203 --rc genhtml_function_coverage=1 00:05:51.203 --rc genhtml_legend=1 00:05:51.203 --rc geninfo_all_blocks=1 00:05:51.203 --rc geninfo_unexecuted_blocks=1 00:05:51.203 00:05:51.203 ' 00:05:51.203 21:47:35 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.203 21:47:35 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.203 21:47:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.203 ************************************ 00:05:51.203 START TEST env_memory 00:05:51.203 ************************************ 00:05:51.203 21:47:35 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:51.203 00:05:51.203 00:05:51.203 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.203 http://cunit.sourceforge.net/ 00:05:51.203 00:05:51.203 00:05:51.203 Suite: memory 00:05:51.203 Test: alloc and free memory map ...[2024-09-30 21:47:35.979341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:51.203 passed 00:05:51.470 Test: mem map translation ...[2024-09-30 21:47:36.018184] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:51.470 [2024-09-30 21:47:36.018244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:51.470 [2024-09-30 21:47:36.018300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:51.470 [2024-09-30 21:47:36.018315] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:51.470 passed 00:05:51.470 Test: mem map registration ...[2024-09-30 21:47:36.086482] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:51.470 [2024-09-30 21:47:36.086532] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:51.470 passed 00:05:51.470 Test: mem map adjacent registrations ...passed 00:05:51.470 00:05:51.470 Run Summary: Type Total Ran Passed Failed Inactive 00:05:51.470 suites 1 1 n/a 0 0 00:05:51.470 tests 4 4 4 0 0 00:05:51.470 asserts 152 152 152 0 n/a 00:05:51.470 00:05:51.470 Elapsed time = 0.233 seconds 00:05:51.470 00:05:51.470 real 0m0.265s 00:05:51.470 user 0m0.233s 00:05:51.470 sys 0m0.025s 00:05:51.470 ************************************ 00:05:51.470 END TEST env_memory 00:05:51.470 ************************************ 00:05:51.470 21:47:36 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.470 21:47:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:51.470 21:47:36 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:51.470 21:47:36 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.470 21:47:36 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.470 21:47:36 env -- common/autotest_common.sh@10 -- # set +x 00:05:51.470 ************************************ 00:05:51.470 START TEST env_vtophys 00:05:51.470 ************************************ 00:05:51.470 21:47:36 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:51.737 EAL: lib.eal log level changed from notice to debug 00:05:51.737 EAL: Detected lcore 0 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 1 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 2 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 3 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 4 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 5 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 6 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 7 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 8 as core 0 on socket 0 00:05:51.737 EAL: Detected lcore 9 as core 0 on socket 0 00:05:51.737 EAL: Maximum logical cores by configuration: 128 00:05:51.737 EAL: Detected CPU lcores: 10 00:05:51.737 EAL: Detected NUMA nodes: 1 00:05:51.737 EAL: Checking presence of .so 'librte_eal.so.25.0' 00:05:51.737 EAL: Detected shared linkage of DPDK 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0 00:05:51.737 EAL: Registered [vdev] bus. 00:05:51.737 EAL: bus.vdev log level changed from disabled to notice 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0 00:05:51.737 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:51.737 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:05:51.737 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:05:51.737 EAL: No shared files mode enabled, IPC will be disabled 00:05:51.737 EAL: No shared files mode enabled, IPC is disabled 00:05:51.737 EAL: Selected IOVA mode 'PA' 00:05:51.737 EAL: Probing VFIO support... 00:05:51.737 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:51.737 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:51.737 EAL: Ask a virtual area of 0x2e000 bytes 00:05:51.737 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:51.737 EAL: Setting up physically contiguous memory... 
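EAL decides on VFIO support by looking for the kernel modules under /sys/module; neither vfio nor vfio_pci is loaded in this guest, so probing is skipped and IOVA mode 'PA' is selected, matching the uio_pci_generic bindings made earlier. The availability check can be reproduced from the shell (a sketch of the condition, not DPDK's actual code path):

    for mod in vfio vfio_pci; do
        if [[ -d /sys/module/$mod ]]; then
            echo "$mod loaded"
        else
            echo "$mod not loaded (EAL falls back from VFIO)"
        fi
    done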
00:05:51.737 EAL: Setting maximum number of open files to 524288 00:05:51.737 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:51.737 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:51.737 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.737 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:51.737 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.737 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.737 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:51.737 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:51.737 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.737 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:51.737 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.737 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.737 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:51.737 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:51.737 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.737 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:51.737 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.737 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.737 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:51.737 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:51.737 EAL: Ask a virtual area of 0x61000 bytes 00:05:51.737 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:51.737 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:51.737 EAL: Ask a virtual area of 0x400000000 bytes 00:05:51.737 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:51.737 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:51.737 EAL: Hugepages will be freed exactly as allocated. 00:05:51.737 EAL: No shared files mode enabled, IPC is disabled 00:05:51.737 EAL: No shared files mode enabled, IPC is disabled 00:05:51.737 EAL: TSC frequency is ~2600000 KHz 00:05:51.737 EAL: Main lcore 0 is ready (tid=7f336a21da40;cpuset=[0]) 00:05:51.737 EAL: Trying to obtain current memory policy. 00:05:51.737 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:51.737 EAL: Restoring previous memory policy: 0 00:05:51.737 EAL: request: mp_malloc_sync 00:05:51.737 EAL: No shared files mode enabled, IPC is disabled 00:05:51.737 EAL: Heap on socket 0 was expanded by 2MB 00:05:51.737 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:51.737 EAL: No shared files mode enabled, IPC is disabled 00:05:51.737 EAL: Mem event callback 'spdk:(nil)' registered 00:05:51.737 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:51.737 00:05:51.737 00:05:51.737 CUnit - A unit testing framework for C - Version 2.1-3 00:05:51.737 http://cunit.sourceforge.net/ 00:05:51.737 00:05:51.737 00:05:51.737 Suite: components_suite 00:05:52.311 Test: vtophys_malloc_test ...passed 00:05:52.311 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
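Each of the four memseg lists above pairs 0x61000 bytes of metadata with a 0x400000000-byte virtual-address reservation for 2 MiB (0x800 kB) hugepages, so the process sets aside 4 x 16 GiB = 64 GiB of VA up front; nothing is backed by physical memory until pages are actually faulted in. The arithmetic, for reference:

    echo $(( 0x400000000 / 1024 / 1024 / 1024 )) GiB per memseg list   # -> 16
    echo $(( 4 * 0x400000000 / 1024 / 1024 / 1024 )) GiB reserved      # -> 64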
00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 4MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was shrunk by 4MB 00:05:52.311 EAL: Trying to obtain current memory policy. 00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 6MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was shrunk by 6MB 00:05:52.311 EAL: Trying to obtain current memory policy. 00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 10MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was shrunk by 10MB 00:05:52.311 EAL: Trying to obtain current memory policy. 00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 18MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was shrunk by 18MB 00:05:52.311 EAL: Trying to obtain current memory policy. 00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 34MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was shrunk by 34MB 00:05:52.311 EAL: Trying to obtain current memory policy. 
00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 66MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was shrunk by 66MB 00:05:52.311 EAL: Trying to obtain current memory policy. 00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 130MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was shrunk by 130MB 00:05:52.311 EAL: Trying to obtain current memory policy. 00:05:52.311 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.311 EAL: Restoring previous memory policy: 4 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.311 EAL: request: mp_malloc_sync 00:05:52.311 EAL: No shared files mode enabled, IPC is disabled 00:05:52.311 EAL: Heap on socket 0 was expanded by 258MB 00:05:52.311 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.312 EAL: request: mp_malloc_sync 00:05:52.312 EAL: No shared files mode enabled, IPC is disabled 00:05:52.312 EAL: Heap on socket 0 was shrunk by 258MB 00:05:52.312 EAL: Trying to obtain current memory policy. 00:05:52.312 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.573 EAL: Restoring previous memory policy: 4 00:05:52.573 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.573 EAL: request: mp_malloc_sync 00:05:52.573 EAL: No shared files mode enabled, IPC is disabled 00:05:52.573 EAL: Heap on socket 0 was expanded by 514MB 00:05:52.573 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.573 EAL: request: mp_malloc_sync 00:05:52.573 EAL: No shared files mode enabled, IPC is disabled 00:05:52.573 EAL: Heap on socket 0 was shrunk by 514MB 00:05:52.573 EAL: Trying to obtain current memory policy. 
00:05:52.573 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.835 EAL: Restoring previous memory policy: 4 00:05:52.835 EAL: Calling mem event callback 'spdk:(nil)' 00:05:52.835 EAL: request: mp_malloc_sync 00:05:52.835 EAL: No shared files mode enabled, IPC is disabled 00:05:52.835 EAL: Heap on socket 0 was expanded by 1026MB 00:05:52.835 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.097 passed 00:05:53.097 00:05:53.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.097 suites 1 1 n/a 0 0 00:05:53.097 tests 2 2 2 0 0 00:05:53.097 asserts 5169 5169 5169 0 n/a 00:05:53.097 00:05:53.097 Elapsed time = 1.215 seconds 00:05:53.097 EAL: request: mp_malloc_sync 00:05:53.097 EAL: No shared files mode enabled, IPC is disabled 00:05:53.097 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:53.097 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.097 EAL: request: mp_malloc_sync 00:05:53.097 EAL: No shared files mode enabled, IPC is disabled 00:05:53.097 EAL: Heap on socket 0 was shrunk by 2MB 00:05:53.097 EAL: No shared files mode enabled, IPC is disabled 00:05:53.097 EAL: No shared files mode enabled, IPC is disabled 00:05:53.097 EAL: No shared files mode enabled, IPC is disabled 00:05:53.097 00:05:53.097 real 0m1.454s 00:05:53.097 user 0m0.586s 00:05:53.097 sys 0m0.732s 00:05:53.097 21:47:37 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.097 ************************************ 00:05:53.097 END TEST env_vtophys 00:05:53.097 ************************************ 00:05:53.097 21:47:37 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:53.097 21:47:37 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.097 21:47:37 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.097 21:47:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.097 21:47:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.097 ************************************ 00:05:53.097 START TEST env_pci 00:05:53.097 ************************************ 00:05:53.097 21:47:37 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:53.097 00:05:53.097 00:05:53.097 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.097 http://cunit.sourceforge.net/ 00:05:53.097 00:05:53.097 00:05:53.097 Suite: pci 00:05:53.097 Test: pci_hook ...[2024-09-30 21:47:37.794150] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70646 has claimed it 00:05:53.097 passed 00:05:53.097 00:05:53.097 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.097 suites 1 1 n/a 0 0 00:05:53.097 tests 1 1 1 0 0 00:05:53.097 asserts 25 25 25 0 n/a 00:05:53.097 00:05:53.097 Elapsed time = 0.005 seconds 00:05:53.097 EAL: Cannot find device (10000:00:01.0) 00:05:53.097 EAL: Failed to attach device on primary process 00:05:53.097 00:05:53.097 real 0m0.059s 00:05:53.097 user 0m0.025s 00:05:53.097 sys 0m0.033s 00:05:53.097 ************************************ 00:05:53.097 END TEST env_pci 00:05:53.097 ************************************ 00:05:53.097 21:47:37 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.097 21:47:37 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:53.097 21:47:37 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:53.097 21:47:37 env -- env/env.sh@15 -- # uname 00:05:53.097 21:47:37 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:53.097 21:47:37 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:53.097 21:47:37 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.097 21:47:37 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:53.097 21:47:37 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.097 21:47:37 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.097 ************************************ 00:05:53.097 START TEST env_dpdk_post_init 00:05:53.097 ************************************ 00:05:53.097 21:47:37 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.358 EAL: Detected CPU lcores: 10 00:05:53.359 EAL: Detected NUMA nodes: 1 00:05:53.359 EAL: Detected shared linkage of DPDK 00:05:53.359 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.359 EAL: Selected IOVA mode 'PA' 00:05:53.359 Starting DPDK initialization... 00:05:53.359 Starting SPDK post initialization... 00:05:53.359 SPDK NVMe probe 00:05:53.359 Attaching to 0000:00:10.0 00:05:53.359 Attaching to 0000:00:11.0 00:05:53.359 Attaching to 0000:00:12.0 00:05:53.359 Attaching to 0000:00:13.0 00:05:53.359 Attached to 0000:00:11.0 00:05:53.359 Attached to 0000:00:13.0 00:05:53.359 Attached to 0000:00:10.0 00:05:53.359 Attached to 0000:00:12.0 00:05:53.359 Cleaning up... 00:05:53.359 00:05:53.359 real 0m0.242s 00:05:53.359 user 0m0.065s 00:05:53.359 sys 0m0.080s 00:05:53.359 ************************************ 00:05:53.359 END TEST env_dpdk_post_init 00:05:53.359 ************************************ 00:05:53.359 21:47:38 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.359 21:47:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.620 21:47:38 env -- env/env.sh@26 -- # uname 00:05:53.620 21:47:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:53.620 21:47:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.620 21:47:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.620 21:47:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.620 21:47:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.620 ************************************ 00:05:53.620 START TEST env_mem_callbacks 00:05:53.620 ************************************ 00:05:53.620 21:47:38 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:53.620 EAL: Detected CPU lcores: 10 00:05:53.620 EAL: Detected NUMA nodes: 1 00:05:53.620 EAL: Detected shared linkage of DPDK 00:05:53.620 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.620 EAL: Selected IOVA mode 'PA' 00:05:53.620 00:05:53.620 00:05:53.620 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.620 http://cunit.sourceforge.net/ 00:05:53.620 00:05:53.620 00:05:53.620 Suite: memory 00:05:53.620 Test: test ... 
00:05:53.620 register 0x200000200000 2097152 00:05:53.620 malloc 3145728 00:05:53.620 register 0x200000400000 4194304 00:05:53.620 buf 0x200000500000 len 3145728 PASSED 00:05:53.620 malloc 64 00:05:53.620 buf 0x2000004fff40 len 64 PASSED 00:05:53.620 malloc 4194304 00:05:53.620 register 0x200000800000 6291456 00:05:53.620 buf 0x200000a00000 len 4194304 PASSED 00:05:53.620 free 0x200000500000 3145728 00:05:53.620 free 0x2000004fff40 64 00:05:53.620 unregister 0x200000400000 4194304 PASSED 00:05:53.620 free 0x200000a00000 4194304 00:05:53.620 unregister 0x200000800000 6291456 PASSED 00:05:53.620 malloc 8388608 00:05:53.620 register 0x200000400000 10485760 00:05:53.620 buf 0x200000600000 len 8388608 PASSED 00:05:53.620 free 0x200000600000 8388608 00:05:53.620 unregister 0x200000400000 10485760 PASSED 00:05:53.620 passed 00:05:53.620 00:05:53.620 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.620 suites 1 1 n/a 0 0 00:05:53.620 tests 1 1 1 0 0 00:05:53.620 asserts 15 15 15 0 n/a 00:05:53.620 00:05:53.620 Elapsed time = 0.013 seconds 00:05:53.620 00:05:53.620 real 0m0.181s 00:05:53.620 user 0m0.030s 00:05:53.620 sys 0m0.048s 00:05:53.620 21:47:38 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.620 21:47:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:53.620 ************************************ 00:05:53.620 END TEST env_mem_callbacks 00:05:53.620 ************************************ 00:05:53.882 00:05:53.882 real 0m2.690s 00:05:53.882 user 0m1.104s 00:05:53.882 sys 0m1.129s 00:05:53.882 21:47:38 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.882 ************************************ 00:05:53.882 END TEST env 00:05:53.882 ************************************ 00:05:53.882 21:47:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.882 21:47:38 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.882 21:47:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.882 21:47:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.882 21:47:38 -- common/autotest_common.sh@10 -- # set +x 00:05:53.882 ************************************ 00:05:53.882 START TEST rpc 00:05:53.882 ************************************ 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:53.882 * Looking for test storage... 
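The START/END banners and the real/user/sys totals around each suite come from the harness's run_test helper, which times the command it wraps. A simplified stand-in showing the shape of that wrapper (the real helper in autotest_common.sh additionally manages xtrace and argument validation):

    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                     # the real/user/sys lines come from this
        echo "END TEST $name"
    }
    # usage: run_test_sketch env_memory /path/to/test/env/memory/memory_ut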
00:05:53.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.882 21:47:38 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.882 21:47:38 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.882 21:47:38 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.882 21:47:38 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.882 21:47:38 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.882 21:47:38 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.882 21:47:38 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.882 21:47:38 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.882 21:47:38 rpc -- scripts/common.sh@345 -- # : 1 00:05:53.882 21:47:38 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.882 21:47:38 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:53.882 21:47:38 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.882 21:47:38 rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.882 21:47:38 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.882 21:47:38 rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.882 21:47:38 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.882 21:47:38 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.882 21:47:38 rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.882 21:47:38 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.882 21:47:38 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.882 21:47:38 rpc -- scripts/common.sh@368 -- # return 0 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.882 --rc genhtml_branch_coverage=1 00:05:53.882 --rc genhtml_function_coverage=1 00:05:53.882 --rc genhtml_legend=1 00:05:53.882 --rc geninfo_all_blocks=1 00:05:53.882 --rc geninfo_unexecuted_blocks=1 00:05:53.882 00:05:53.882 ' 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.882 --rc genhtml_branch_coverage=1 00:05:53.882 --rc genhtml_function_coverage=1 00:05:53.882 --rc genhtml_legend=1 00:05:53.882 --rc geninfo_all_blocks=1 00:05:53.882 --rc geninfo_unexecuted_blocks=1 00:05:53.882 00:05:53.882 ' 00:05:53.882 21:47:38 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:53.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.882 --rc genhtml_branch_coverage=1 00:05:53.882 --rc genhtml_function_coverage=1 00:05:53.883 --rc 
genhtml_legend=1 00:05:53.883 --rc geninfo_all_blocks=1 00:05:53.883 --rc geninfo_unexecuted_blocks=1 00:05:53.883 00:05:53.883 ' 00:05:53.883 21:47:38 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:53.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.883 --rc genhtml_branch_coverage=1 00:05:53.883 --rc genhtml_function_coverage=1 00:05:53.883 --rc genhtml_legend=1 00:05:53.883 --rc geninfo_all_blocks=1 00:05:53.883 --rc geninfo_unexecuted_blocks=1 00:05:53.883 00:05:53.883 ' 00:05:53.883 21:47:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70772 00:05:53.883 21:47:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.883 21:47:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70772 00:05:53.883 21:47:38 rpc -- common/autotest_common.sh@831 -- # '[' -z 70772 ']' 00:05:53.883 21:47:38 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.883 21:47:38 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:53.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.883 21:47:38 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.883 21:47:38 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:53.883 21:47:38 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:53.883 21:47:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.144 [2024-09-30 21:47:38.750336] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:54.144 [2024-09-30 21:47:38.750494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70772 ] 00:05:54.144 [2024-09-30 21:47:38.882733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:54.144 [2024-09-30 21:47:38.904096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.409 [2024-09-30 21:47:38.960607] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:54.409 [2024-09-30 21:47:38.960671] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70772' to capture a snapshot of events at runtime. 00:05:54.409 [2024-09-30 21:47:38.960683] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.409 [2024-09-30 21:47:38.960702] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.409 [2024-09-30 21:47:38.960714] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70772 for offline analysis/debug. 
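Because spdk_tgt was started with -e bdev, the bdev tracepoint group is active and the target exposes its trace buffer at /dev/shm/spdk_tgt_trace.pid70772 (the pid suffix is build-specific). Following the notices above, a snapshot can be taken live, or the shm file copied out and decoded offline:

    # live snapshot of the running target's events
    spdk_trace -s spdk_tgt -p 70772
    # offline: copy the buffer, then decode the copy
    cp /dev/shm/spdk_tgt_trace.pid70772 /tmp/
    spdk_trace -f /tmp/spdk_tgt_trace.pid70772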
00:05:54.409 [2024-09-30 21:47:38.960758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.987 21:47:39 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:54.987 21:47:39 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:54.987 21:47:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.987 21:47:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:54.987 21:47:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:54.987 21:47:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:54.987 21:47:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:54.987 21:47:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:54.987 21:47:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.987 ************************************ 00:05:54.987 START TEST rpc_integrity 00:05:54.987 ************************************ 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.987 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.987 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:54.987 { 00:05:54.987 "name": "Malloc0", 00:05:54.987 "aliases": [ 00:05:54.987 "eb032597-42ba-4329-baf9-4fa9a83784cc" 00:05:54.987 ], 00:05:54.987 "product_name": "Malloc disk", 00:05:54.987 "block_size": 512, 00:05:54.987 "num_blocks": 16384, 00:05:54.987 "uuid": "eb032597-42ba-4329-baf9-4fa9a83784cc", 00:05:54.988 "assigned_rate_limits": { 00:05:54.988 "rw_ios_per_sec": 0, 00:05:54.988 "rw_mbytes_per_sec": 0, 00:05:54.988 "r_mbytes_per_sec": 0, 00:05:54.988 "w_mbytes_per_sec": 0 00:05:54.988 }, 00:05:54.988 "claimed": false, 00:05:54.988 "zoned": false, 00:05:54.988 "supported_io_types": { 00:05:54.988 "read": true, 00:05:54.988 "write": true, 00:05:54.988 "unmap": true, 00:05:54.988 "flush": true, 
00:05:54.988 "reset": true, 00:05:54.988 "nvme_admin": false, 00:05:54.988 "nvme_io": false, 00:05:54.988 "nvme_io_md": false, 00:05:54.988 "write_zeroes": true, 00:05:54.988 "zcopy": true, 00:05:54.988 "get_zone_info": false, 00:05:54.988 "zone_management": false, 00:05:54.988 "zone_append": false, 00:05:54.988 "compare": false, 00:05:54.988 "compare_and_write": false, 00:05:54.988 "abort": true, 00:05:54.988 "seek_hole": false, 00:05:54.988 "seek_data": false, 00:05:54.988 "copy": true, 00:05:54.988 "nvme_iov_md": false 00:05:54.988 }, 00:05:54.988 "memory_domains": [ 00:05:54.988 { 00:05:54.988 "dma_device_id": "system", 00:05:54.988 "dma_device_type": 1 00:05:54.988 }, 00:05:54.988 { 00:05:54.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.988 "dma_device_type": 2 00:05:54.988 } 00:05:54.988 ], 00:05:54.988 "driver_specific": {} 00:05:54.988 } 00:05:54.988 ]' 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.988 [2024-09-30 21:47:39.728569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:54.988 [2024-09-30 21:47:39.728659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.988 [2024-09-30 21:47:39.728686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:54.988 [2024-09-30 21:47:39.728700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.988 [2024-09-30 21:47:39.731380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.988 [2024-09-30 21:47:39.731439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:54.988 Passthru0 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:54.988 { 00:05:54.988 "name": "Malloc0", 00:05:54.988 "aliases": [ 00:05:54.988 "eb032597-42ba-4329-baf9-4fa9a83784cc" 00:05:54.988 ], 00:05:54.988 "product_name": "Malloc disk", 00:05:54.988 "block_size": 512, 00:05:54.988 "num_blocks": 16384, 00:05:54.988 "uuid": "eb032597-42ba-4329-baf9-4fa9a83784cc", 00:05:54.988 "assigned_rate_limits": { 00:05:54.988 "rw_ios_per_sec": 0, 00:05:54.988 "rw_mbytes_per_sec": 0, 00:05:54.988 "r_mbytes_per_sec": 0, 00:05:54.988 "w_mbytes_per_sec": 0 00:05:54.988 }, 00:05:54.988 "claimed": true, 00:05:54.988 "claim_type": "exclusive_write", 00:05:54.988 "zoned": false, 00:05:54.988 "supported_io_types": { 00:05:54.988 "read": true, 00:05:54.988 "write": true, 00:05:54.988 "unmap": true, 00:05:54.988 "flush": true, 00:05:54.988 "reset": true, 00:05:54.988 "nvme_admin": false, 00:05:54.988 "nvme_io": false, 00:05:54.988 "nvme_io_md": false, 00:05:54.988 "write_zeroes": true, 00:05:54.988 "zcopy": true, 
00:05:54.988 "get_zone_info": false, 00:05:54.988 "zone_management": false, 00:05:54.988 "zone_append": false, 00:05:54.988 "compare": false, 00:05:54.988 "compare_and_write": false, 00:05:54.988 "abort": true, 00:05:54.988 "seek_hole": false, 00:05:54.988 "seek_data": false, 00:05:54.988 "copy": true, 00:05:54.988 "nvme_iov_md": false 00:05:54.988 }, 00:05:54.988 "memory_domains": [ 00:05:54.988 { 00:05:54.988 "dma_device_id": "system", 00:05:54.988 "dma_device_type": 1 00:05:54.988 }, 00:05:54.988 { 00:05:54.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.988 "dma_device_type": 2 00:05:54.988 } 00:05:54.988 ], 00:05:54.988 "driver_specific": {} 00:05:54.988 }, 00:05:54.988 { 00:05:54.988 "name": "Passthru0", 00:05:54.988 "aliases": [ 00:05:54.988 "b4652195-cde0-5c52-ab02-04b882c96cea" 00:05:54.988 ], 00:05:54.988 "product_name": "passthru", 00:05:54.988 "block_size": 512, 00:05:54.988 "num_blocks": 16384, 00:05:54.988 "uuid": "b4652195-cde0-5c52-ab02-04b882c96cea", 00:05:54.988 "assigned_rate_limits": { 00:05:54.988 "rw_ios_per_sec": 0, 00:05:54.988 "rw_mbytes_per_sec": 0, 00:05:54.988 "r_mbytes_per_sec": 0, 00:05:54.988 "w_mbytes_per_sec": 0 00:05:54.988 }, 00:05:54.988 "claimed": false, 00:05:54.988 "zoned": false, 00:05:54.988 "supported_io_types": { 00:05:54.988 "read": true, 00:05:54.988 "write": true, 00:05:54.988 "unmap": true, 00:05:54.988 "flush": true, 00:05:54.988 "reset": true, 00:05:54.988 "nvme_admin": false, 00:05:54.988 "nvme_io": false, 00:05:54.988 "nvme_io_md": false, 00:05:54.988 "write_zeroes": true, 00:05:54.988 "zcopy": true, 00:05:54.988 "get_zone_info": false, 00:05:54.988 "zone_management": false, 00:05:54.988 "zone_append": false, 00:05:54.988 "compare": false, 00:05:54.988 "compare_and_write": false, 00:05:54.988 "abort": true, 00:05:54.988 "seek_hole": false, 00:05:54.988 "seek_data": false, 00:05:54.988 "copy": true, 00:05:54.988 "nvme_iov_md": false 00:05:54.988 }, 00:05:54.988 "memory_domains": [ 00:05:54.988 { 00:05:54.988 "dma_device_id": "system", 00:05:54.988 "dma_device_type": 1 00:05:54.988 }, 00:05:54.988 { 00:05:54.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:54.988 "dma_device_type": 2 00:05:54.988 } 00:05:54.988 ], 00:05:54.988 "driver_specific": { 00:05:54.988 "passthru": { 00:05:54.988 "name": "Passthru0", 00:05:54.988 "base_bdev_name": "Malloc0" 00:05:54.988 } 00:05:54.988 } 00:05:54.988 } 00:05:54.988 ]' 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:54.988 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.988 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.250 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.250 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.250 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:55.250 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:55.250 21:47:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:55.250 00:05:55.250 real 0m0.241s 00:05:55.250 user 0m0.126s 00:05:55.250 sys 0m0.042s 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.250 ************************************ 00:05:55.250 END TEST rpc_integrity 00:05:55.250 ************************************ 00:05:55.250 21:47:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 21:47:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:55.250 21:47:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.250 21:47:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.250 21:47:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 ************************************ 00:05:55.250 START TEST rpc_plugins 00:05:55.250 ************************************ 00:05:55.250 21:47:39 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:55.250 21:47:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:55.250 21:47:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.250 21:47:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 21:47:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.250 21:47:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:55.250 21:47:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:55.250 21:47:39 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.250 21:47:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 21:47:39 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.250 21:47:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:55.250 { 00:05:55.250 "name": "Malloc1", 00:05:55.250 "aliases": [ 00:05:55.250 "d184d718-2057-4b42-b167-35ca971ab3ac" 00:05:55.250 ], 00:05:55.250 "product_name": "Malloc disk", 00:05:55.250 "block_size": 4096, 00:05:55.250 "num_blocks": 256, 00:05:55.250 "uuid": "d184d718-2057-4b42-b167-35ca971ab3ac", 00:05:55.250 "assigned_rate_limits": { 00:05:55.250 "rw_ios_per_sec": 0, 00:05:55.250 "rw_mbytes_per_sec": 0, 00:05:55.250 "r_mbytes_per_sec": 0, 00:05:55.250 "w_mbytes_per_sec": 0 00:05:55.250 }, 00:05:55.250 "claimed": false, 00:05:55.250 "zoned": false, 00:05:55.250 "supported_io_types": { 00:05:55.250 "read": true, 00:05:55.250 "write": true, 00:05:55.250 "unmap": true, 00:05:55.250 "flush": true, 00:05:55.250 "reset": true, 00:05:55.250 "nvme_admin": false, 00:05:55.250 "nvme_io": false, 00:05:55.250 "nvme_io_md": false, 00:05:55.250 "write_zeroes": true, 00:05:55.250 "zcopy": true, 00:05:55.250 "get_zone_info": false, 00:05:55.250 "zone_management": false, 00:05:55.250 "zone_append": false, 00:05:55.250 "compare": false, 00:05:55.250 "compare_and_write": false, 00:05:55.250 "abort": true, 00:05:55.250 "seek_hole": false, 00:05:55.250 "seek_data": false, 00:05:55.250 "copy": true, 00:05:55.250 "nvme_iov_md": false 00:05:55.250 }, 00:05:55.250 "memory_domains": [ 00:05:55.250 { 00:05:55.250 "dma_device_id": "system", 00:05:55.250 "dma_device_type": 1 00:05:55.250 }, 00:05:55.250 { 00:05:55.250 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:05:55.250 "dma_device_type": 2 00:05:55.250 } 00:05:55.250 ], 00:05:55.250 "driver_specific": {} 00:05:55.250 } 00:05:55.250 ]' 00:05:55.250 21:47:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:55.250 21:47:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:55.250 21:47:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:55.250 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.250 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.250 21:47:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:55.250 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.250 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.250 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.250 21:47:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:55.250 21:47:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:55.512 21:47:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:55.512 00:05:55.512 real 0m0.142s 00:05:55.512 user 0m0.089s 00:05:55.512 sys 0m0.014s 00:05:55.512 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.512 ************************************ 00:05:55.512 END TEST rpc_plugins 00:05:55.512 ************************************ 00:05:55.512 21:47:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:55.512 21:47:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:55.512 21:47:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.512 21:47:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.512 21:47:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.512 ************************************ 00:05:55.512 START TEST rpc_trace_cmd_test 00:05:55.512 ************************************ 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:55.512 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70772", 00:05:55.512 "tpoint_group_mask": "0x8", 00:05:55.512 "iscsi_conn": { 00:05:55.512 "mask": "0x2", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "scsi": { 00:05:55.512 "mask": "0x4", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "bdev": { 00:05:55.512 "mask": "0x8", 00:05:55.512 "tpoint_mask": "0xffffffffffffffff" 00:05:55.512 }, 00:05:55.512 "nvmf_rdma": { 00:05:55.512 "mask": "0x10", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "nvmf_tcp": { 00:05:55.512 "mask": "0x20", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "ftl": { 00:05:55.512 "mask": "0x40", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "blobfs": { 00:05:55.512 "mask": "0x80", 00:05:55.512 
"tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "dsa": { 00:05:55.512 "mask": "0x200", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "thread": { 00:05:55.512 "mask": "0x400", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "nvme_pcie": { 00:05:55.512 "mask": "0x800", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "iaa": { 00:05:55.512 "mask": "0x1000", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "nvme_tcp": { 00:05:55.512 "mask": "0x2000", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "bdev_nvme": { 00:05:55.512 "mask": "0x4000", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "sock": { 00:05:55.512 "mask": "0x8000", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "blob": { 00:05:55.512 "mask": "0x10000", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 }, 00:05:55.512 "bdev_raid": { 00:05:55.512 "mask": "0x20000", 00:05:55.512 "tpoint_mask": "0x0" 00:05:55.512 } 00:05:55.512 }' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:55.512 00:05:55.512 real 0m0.172s 00:05:55.512 user 0m0.133s 00:05:55.512 sys 0m0.027s 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.512 ************************************ 00:05:55.512 END TEST rpc_trace_cmd_test 00:05:55.512 ************************************ 00:05:55.512 21:47:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 21:47:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:55.774 21:47:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:55.774 21:47:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:55.774 21:47:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.774 21:47:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.774 21:47:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 ************************************ 00:05:55.774 START TEST rpc_daemon_integrity 00:05:55.774 ************************************ 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 
00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:55.774 { 00:05:55.774 "name": "Malloc2", 00:05:55.774 "aliases": [ 00:05:55.774 "92702012-8688-422e-8a1e-44e12734fde4" 00:05:55.774 ], 00:05:55.774 "product_name": "Malloc disk", 00:05:55.774 "block_size": 512, 00:05:55.774 "num_blocks": 16384, 00:05:55.774 "uuid": "92702012-8688-422e-8a1e-44e12734fde4", 00:05:55.774 "assigned_rate_limits": { 00:05:55.774 "rw_ios_per_sec": 0, 00:05:55.774 "rw_mbytes_per_sec": 0, 00:05:55.774 "r_mbytes_per_sec": 0, 00:05:55.774 "w_mbytes_per_sec": 0 00:05:55.774 }, 00:05:55.774 "claimed": false, 00:05:55.774 "zoned": false, 00:05:55.774 "supported_io_types": { 00:05:55.774 "read": true, 00:05:55.774 "write": true, 00:05:55.774 "unmap": true, 00:05:55.774 "flush": true, 00:05:55.774 "reset": true, 00:05:55.774 "nvme_admin": false, 00:05:55.774 "nvme_io": false, 00:05:55.774 "nvme_io_md": false, 00:05:55.774 "write_zeroes": true, 00:05:55.774 "zcopy": true, 00:05:55.774 "get_zone_info": false, 00:05:55.774 "zone_management": false, 00:05:55.774 "zone_append": false, 00:05:55.774 "compare": false, 00:05:55.774 "compare_and_write": false, 00:05:55.774 "abort": true, 00:05:55.774 "seek_hole": false, 00:05:55.774 "seek_data": false, 00:05:55.774 "copy": true, 00:05:55.774 "nvme_iov_md": false 00:05:55.774 }, 00:05:55.774 "memory_domains": [ 00:05:55.774 { 00:05:55.774 "dma_device_id": "system", 00:05:55.774 "dma_device_type": 1 00:05:55.774 }, 00:05:55.774 { 00:05:55.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.774 "dma_device_type": 2 00:05:55.774 } 00:05:55.774 ], 00:05:55.774 "driver_specific": {} 00:05:55.774 } 00:05:55.774 ]' 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 [2024-09-30 21:47:40.490570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:55.774 [2024-09-30 21:47:40.490654] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:55.774 [2024-09-30 21:47:40.490676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:55.774 [2024-09-30 21:47:40.490688] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:05:55.774 [2024-09-30 21:47:40.493241] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:55.774 [2024-09-30 21:47:40.493296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:55.774 Passthru0 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.774 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:55.774 { 00:05:55.774 "name": "Malloc2", 00:05:55.774 "aliases": [ 00:05:55.774 "92702012-8688-422e-8a1e-44e12734fde4" 00:05:55.774 ], 00:05:55.774 "product_name": "Malloc disk", 00:05:55.774 "block_size": 512, 00:05:55.774 "num_blocks": 16384, 00:05:55.774 "uuid": "92702012-8688-422e-8a1e-44e12734fde4", 00:05:55.774 "assigned_rate_limits": { 00:05:55.774 "rw_ios_per_sec": 0, 00:05:55.774 "rw_mbytes_per_sec": 0, 00:05:55.774 "r_mbytes_per_sec": 0, 00:05:55.774 "w_mbytes_per_sec": 0 00:05:55.774 }, 00:05:55.774 "claimed": true, 00:05:55.774 "claim_type": "exclusive_write", 00:05:55.774 "zoned": false, 00:05:55.774 "supported_io_types": { 00:05:55.774 "read": true, 00:05:55.774 "write": true, 00:05:55.774 "unmap": true, 00:05:55.774 "flush": true, 00:05:55.774 "reset": true, 00:05:55.774 "nvme_admin": false, 00:05:55.774 "nvme_io": false, 00:05:55.774 "nvme_io_md": false, 00:05:55.774 "write_zeroes": true, 00:05:55.774 "zcopy": true, 00:05:55.774 "get_zone_info": false, 00:05:55.774 "zone_management": false, 00:05:55.774 "zone_append": false, 00:05:55.774 "compare": false, 00:05:55.774 "compare_and_write": false, 00:05:55.774 "abort": true, 00:05:55.774 "seek_hole": false, 00:05:55.774 "seek_data": false, 00:05:55.774 "copy": true, 00:05:55.774 "nvme_iov_md": false 00:05:55.774 }, 00:05:55.774 "memory_domains": [ 00:05:55.774 { 00:05:55.774 "dma_device_id": "system", 00:05:55.774 "dma_device_type": 1 00:05:55.774 }, 00:05:55.774 { 00:05:55.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.774 "dma_device_type": 2 00:05:55.774 } 00:05:55.774 ], 00:05:55.774 "driver_specific": {} 00:05:55.774 }, 00:05:55.774 { 00:05:55.774 "name": "Passthru0", 00:05:55.774 "aliases": [ 00:05:55.774 "6f18b817-2387-55c1-a4b1-8ecfecfe41f3" 00:05:55.774 ], 00:05:55.774 "product_name": "passthru", 00:05:55.774 "block_size": 512, 00:05:55.774 "num_blocks": 16384, 00:05:55.774 "uuid": "6f18b817-2387-55c1-a4b1-8ecfecfe41f3", 00:05:55.774 "assigned_rate_limits": { 00:05:55.774 "rw_ios_per_sec": 0, 00:05:55.774 "rw_mbytes_per_sec": 0, 00:05:55.774 "r_mbytes_per_sec": 0, 00:05:55.774 "w_mbytes_per_sec": 0 00:05:55.774 }, 00:05:55.774 "claimed": false, 00:05:55.774 "zoned": false, 00:05:55.774 "supported_io_types": { 00:05:55.774 "read": true, 00:05:55.774 "write": true, 00:05:55.775 "unmap": true, 00:05:55.775 "flush": true, 00:05:55.775 "reset": true, 00:05:55.775 "nvme_admin": false, 00:05:55.775 "nvme_io": false, 00:05:55.775 "nvme_io_md": false, 00:05:55.775 "write_zeroes": true, 00:05:55.775 "zcopy": true, 00:05:55.775 "get_zone_info": false, 00:05:55.775 "zone_management": false, 00:05:55.775 "zone_append": false, 00:05:55.775 "compare": false, 00:05:55.775 
"compare_and_write": false, 00:05:55.775 "abort": true, 00:05:55.775 "seek_hole": false, 00:05:55.775 "seek_data": false, 00:05:55.775 "copy": true, 00:05:55.775 "nvme_iov_md": false 00:05:55.775 }, 00:05:55.775 "memory_domains": [ 00:05:55.775 { 00:05:55.775 "dma_device_id": "system", 00:05:55.775 "dma_device_type": 1 00:05:55.775 }, 00:05:55.775 { 00:05:55.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:55.775 "dma_device_type": 2 00:05:55.775 } 00:05:55.775 ], 00:05:55.775 "driver_specific": { 00:05:55.775 "passthru": { 00:05:55.775 "name": "Passthru0", 00:05:55.775 "base_bdev_name": "Malloc2" 00:05:55.775 } 00:05:55.775 } 00:05:55.775 } 00:05:55.775 ]' 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.775 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:56.036 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:56.036 21:47:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:56.036 00:05:56.036 real 0m0.245s 00:05:56.036 user 0m0.137s 00:05:56.036 sys 0m0.037s 00:05:56.036 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.036 ************************************ 00:05:56.036 END TEST rpc_daemon_integrity 00:05:56.036 ************************************ 00:05:56.036 21:47:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:56.036 21:47:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:56.036 21:47:40 rpc -- rpc/rpc.sh@84 -- # killprocess 70772 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@950 -- # '[' -z 70772 ']' 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@954 -- # kill -0 70772 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@955 -- # uname 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70772 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:56.036 killing process with pid 70772 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70772' 
00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@969 -- # kill 70772 00:05:56.036 21:47:40 rpc -- common/autotest_common.sh@974 -- # wait 70772 00:05:56.298 ************************************ 00:05:56.298 END TEST rpc 00:05:56.298 ************************************ 00:05:56.298 00:05:56.298 real 0m2.517s 00:05:56.298 user 0m2.919s 00:05:56.298 sys 0m0.699s 00:05:56.298 21:47:41 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.298 21:47:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.298 21:47:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:56.298 21:47:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.298 21:47:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.298 21:47:41 -- common/autotest_common.sh@10 -- # set +x 00:05:56.298 ************************************ 00:05:56.298 START TEST skip_rpc 00:05:56.298 ************************************ 00:05:56.298 21:47:41 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:56.559 * Looking for test storage... 00:05:56.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.559 21:47:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:56.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.559 --rc genhtml_branch_coverage=1 00:05:56.559 --rc genhtml_function_coverage=1 00:05:56.559 --rc genhtml_legend=1 00:05:56.559 --rc geninfo_all_blocks=1 00:05:56.559 --rc geninfo_unexecuted_blocks=1 00:05:56.559 00:05:56.559 ' 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:56.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.559 --rc genhtml_branch_coverage=1 00:05:56.559 --rc genhtml_function_coverage=1 00:05:56.559 --rc genhtml_legend=1 00:05:56.559 --rc geninfo_all_blocks=1 00:05:56.559 --rc geninfo_unexecuted_blocks=1 00:05:56.559 00:05:56.559 ' 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:56.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.559 --rc genhtml_branch_coverage=1 00:05:56.559 --rc genhtml_function_coverage=1 00:05:56.559 --rc genhtml_legend=1 00:05:56.559 --rc geninfo_all_blocks=1 00:05:56.559 --rc geninfo_unexecuted_blocks=1 00:05:56.559 00:05:56.559 ' 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:56.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.559 --rc genhtml_branch_coverage=1 00:05:56.559 --rc genhtml_function_coverage=1 00:05:56.559 --rc genhtml_legend=1 00:05:56.559 --rc geninfo_all_blocks=1 00:05:56.559 --rc geninfo_unexecuted_blocks=1 00:05:56.559 00:05:56.559 ' 00:05:56.559 21:47:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:56.559 21:47:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:56.559 21:47:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.559 21:47:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.559 ************************************ 00:05:56.559 START TEST skip_rpc 00:05:56.559 ************************************ 00:05:56.559 21:47:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:56.559 21:47:41 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=70974 00:05:56.560 21:47:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:56.560 21:47:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.560 21:47:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:56.560 [2024-09-30 21:47:41.330062] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:05:56.560 [2024-09-30 21:47:41.330237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70974 ] 00:05:56.822 [2024-09-30 21:47:41.463135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:56.822 [2024-09-30 21:47:41.484511] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.822 [2024-09-30 21:47:41.534309] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70974 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 70974 ']' 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 70974 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70974 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.224 killing process with pid 
70974 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70974' 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 70974 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 70974 00:06:02.224 00:06:02.224 real 0m5.273s 00:06:02.224 user 0m4.858s 00:06:02.224 sys 0m0.307s 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.224 ************************************ 00:06:02.224 END TEST skip_rpc 00:06:02.224 ************************************ 00:06:02.224 21:47:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.224 21:47:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:02.224 21:47:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.224 21:47:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.224 21:47:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.224 ************************************ 00:06:02.224 START TEST skip_rpc_with_json 00:06:02.224 ************************************ 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=71056 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 71056 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 71056 ']' 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:02.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.224 21:47:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.224 [2024-09-30 21:47:46.645870] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:02.224 [2024-09-30 21:47:46.646060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71056 ] 00:06:02.224 [2024-09-30 21:47:46.777002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
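The skip_rpc case asserts the inverse of normal startup: with --no-rpc-server the target must reject RPC clients. A hand-run equivalent of rpc/skip_rpc.sh@15-23 might be (a sketch; error handling trimmed):

    # start the target with the RPC server disabled
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5    # no socket will appear, so the test sleeps rather than waiting for one

    # any RPC must now fail; the NOT wrapper above asserts a non-zero exit
    if scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC succeeded although no RPC server is running" >&2
    fi

    kill "$spdk_pid" && wait "$spdk_pid"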
00:06:02.224 [2024-09-30 21:47:46.795496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.224 [2024-09-30 21:47:46.837561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.792 [2024-09-30 21:47:47.483303] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.792 request: 00:06:02.792 { 00:06:02.792 "trtype": "tcp", 00:06:02.792 "method": "nvmf_get_transports", 00:06:02.792 "req_id": 1 00:06:02.792 } 00:06:02.792 Got JSON-RPC error response 00:06:02.792 response: 00:06:02.792 { 00:06:02.792 "code": -19, 00:06:02.792 "message": "No such device" 00:06:02.792 } 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.792 [2024-09-30 21:47:47.495390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.792 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:03.051 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:03.051 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:03.051 { 00:06:03.051 "subsystems": [ 00:06:03.051 { 00:06:03.051 "subsystem": "fsdev", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "fsdev_set_opts", 00:06:03.051 "params": { 00:06:03.051 "fsdev_io_pool_size": 65535, 00:06:03.051 "fsdev_io_cache_size": 256 00:06:03.051 } 00:06:03.051 } 00:06:03.051 ] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "keyring", 00:06:03.051 "config": [] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "iobuf", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "iobuf_set_options", 00:06:03.051 "params": { 00:06:03.051 "small_pool_count": 8192, 00:06:03.051 "large_pool_count": 1024, 00:06:03.051 "small_bufsize": 8192, 00:06:03.051 "large_bufsize": 135168 00:06:03.051 } 00:06:03.051 } 00:06:03.051 ] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "sock", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "sock_set_default_impl", 00:06:03.051 "params": { 00:06:03.051 "impl_name": "posix" 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "sock_impl_set_options", 00:06:03.051 "params": { 00:06:03.051 "impl_name": "ssl", 00:06:03.051 "recv_buf_size": 4096, 00:06:03.051 "send_buf_size": 4096, 00:06:03.051 
"enable_recv_pipe": true, 00:06:03.051 "enable_quickack": false, 00:06:03.051 "enable_placement_id": 0, 00:06:03.051 "enable_zerocopy_send_server": true, 00:06:03.051 "enable_zerocopy_send_client": false, 00:06:03.051 "zerocopy_threshold": 0, 00:06:03.051 "tls_version": 0, 00:06:03.051 "enable_ktls": false 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "sock_impl_set_options", 00:06:03.051 "params": { 00:06:03.051 "impl_name": "posix", 00:06:03.051 "recv_buf_size": 2097152, 00:06:03.051 "send_buf_size": 2097152, 00:06:03.051 "enable_recv_pipe": true, 00:06:03.051 "enable_quickack": false, 00:06:03.051 "enable_placement_id": 0, 00:06:03.051 "enable_zerocopy_send_server": true, 00:06:03.051 "enable_zerocopy_send_client": false, 00:06:03.051 "zerocopy_threshold": 0, 00:06:03.051 "tls_version": 0, 00:06:03.051 "enable_ktls": false 00:06:03.051 } 00:06:03.051 } 00:06:03.051 ] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "vmd", 00:06:03.051 "config": [] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "accel", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "accel_set_options", 00:06:03.051 "params": { 00:06:03.051 "small_cache_size": 128, 00:06:03.051 "large_cache_size": 16, 00:06:03.051 "task_count": 2048, 00:06:03.051 "sequence_count": 2048, 00:06:03.051 "buf_count": 2048 00:06:03.051 } 00:06:03.051 } 00:06:03.051 ] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "bdev", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "bdev_set_options", 00:06:03.051 "params": { 00:06:03.051 "bdev_io_pool_size": 65535, 00:06:03.051 "bdev_io_cache_size": 256, 00:06:03.051 "bdev_auto_examine": true, 00:06:03.051 "iobuf_small_cache_size": 128, 00:06:03.051 "iobuf_large_cache_size": 16 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "bdev_raid_set_options", 00:06:03.051 "params": { 00:06:03.051 "process_window_size_kb": 1024, 00:06:03.051 "process_max_bandwidth_mb_sec": 0 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "bdev_iscsi_set_options", 00:06:03.051 "params": { 00:06:03.051 "timeout_sec": 30 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "bdev_nvme_set_options", 00:06:03.051 "params": { 00:06:03.051 "action_on_timeout": "none", 00:06:03.051 "timeout_us": 0, 00:06:03.051 "timeout_admin_us": 0, 00:06:03.051 "keep_alive_timeout_ms": 10000, 00:06:03.051 "arbitration_burst": 0, 00:06:03.051 "low_priority_weight": 0, 00:06:03.051 "medium_priority_weight": 0, 00:06:03.051 "high_priority_weight": 0, 00:06:03.051 "nvme_adminq_poll_period_us": 10000, 00:06:03.051 "nvme_ioq_poll_period_us": 0, 00:06:03.051 "io_queue_requests": 0, 00:06:03.051 "delay_cmd_submit": true, 00:06:03.051 "transport_retry_count": 4, 00:06:03.051 "bdev_retry_count": 3, 00:06:03.051 "transport_ack_timeout": 0, 00:06:03.051 "ctrlr_loss_timeout_sec": 0, 00:06:03.051 "reconnect_delay_sec": 0, 00:06:03.051 "fast_io_fail_timeout_sec": 0, 00:06:03.051 "disable_auto_failback": false, 00:06:03.051 "generate_uuids": false, 00:06:03.051 "transport_tos": 0, 00:06:03.051 "nvme_error_stat": false, 00:06:03.051 "rdma_srq_size": 0, 00:06:03.051 "io_path_stat": false, 00:06:03.051 "allow_accel_sequence": false, 00:06:03.051 "rdma_max_cq_size": 0, 00:06:03.051 "rdma_cm_event_timeout_ms": 0, 00:06:03.051 "dhchap_digests": [ 00:06:03.051 "sha256", 00:06:03.051 "sha384", 00:06:03.051 "sha512" 00:06:03.051 ], 00:06:03.051 "dhchap_dhgroups": [ 00:06:03.051 "null", 00:06:03.051 "ffdhe2048", 00:06:03.051 "ffdhe3072", 
00:06:03.051 "ffdhe4096", 00:06:03.051 "ffdhe6144", 00:06:03.051 "ffdhe8192" 00:06:03.051 ] 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "bdev_nvme_set_hotplug", 00:06:03.051 "params": { 00:06:03.051 "period_us": 100000, 00:06:03.051 "enable": false 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "bdev_wait_for_examine" 00:06:03.051 } 00:06:03.051 ] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "scsi", 00:06:03.051 "config": null 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "scheduler", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "framework_set_scheduler", 00:06:03.051 "params": { 00:06:03.051 "name": "static" 00:06:03.051 } 00:06:03.051 } 00:06:03.051 ] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "vhost_scsi", 00:06:03.051 "config": [] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "vhost_blk", 00:06:03.051 "config": [] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "ublk", 00:06:03.051 "config": [] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "nbd", 00:06:03.051 "config": [] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "nvmf", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "nvmf_set_config", 00:06:03.051 "params": { 00:06:03.051 "discovery_filter": "match_any", 00:06:03.051 "admin_cmd_passthru": { 00:06:03.051 "identify_ctrlr": false 00:06:03.051 }, 00:06:03.051 "dhchap_digests": [ 00:06:03.051 "sha256", 00:06:03.051 "sha384", 00:06:03.051 "sha512" 00:06:03.051 ], 00:06:03.051 "dhchap_dhgroups": [ 00:06:03.051 "null", 00:06:03.051 "ffdhe2048", 00:06:03.051 "ffdhe3072", 00:06:03.051 "ffdhe4096", 00:06:03.051 "ffdhe6144", 00:06:03.051 "ffdhe8192" 00:06:03.051 ] 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "nvmf_set_max_subsystems", 00:06:03.051 "params": { 00:06:03.051 "max_subsystems": 1024 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "nvmf_set_crdt", 00:06:03.051 "params": { 00:06:03.051 "crdt1": 0, 00:06:03.051 "crdt2": 0, 00:06:03.051 "crdt3": 0 00:06:03.051 } 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "method": "nvmf_create_transport", 00:06:03.051 "params": { 00:06:03.051 "trtype": "TCP", 00:06:03.051 "max_queue_depth": 128, 00:06:03.051 "max_io_qpairs_per_ctrlr": 127, 00:06:03.051 "in_capsule_data_size": 4096, 00:06:03.051 "max_io_size": 131072, 00:06:03.051 "io_unit_size": 131072, 00:06:03.051 "max_aq_depth": 128, 00:06:03.051 "num_shared_buffers": 511, 00:06:03.051 "buf_cache_size": 4294967295, 00:06:03.051 "dif_insert_or_strip": false, 00:06:03.051 "zcopy": false, 00:06:03.051 "c2h_success": true, 00:06:03.051 "sock_priority": 0, 00:06:03.051 "abort_timeout_sec": 1, 00:06:03.051 "ack_timeout": 0, 00:06:03.051 "data_wr_pool_size": 0 00:06:03.051 } 00:06:03.051 } 00:06:03.051 ] 00:06:03.051 }, 00:06:03.051 { 00:06:03.051 "subsystem": "iscsi", 00:06:03.051 "config": [ 00:06:03.051 { 00:06:03.051 "method": "iscsi_set_options", 00:06:03.051 "params": { 00:06:03.051 "node_base": "iqn.2016-06.io.spdk", 00:06:03.051 "max_sessions": 128, 00:06:03.051 "max_connections_per_session": 2, 00:06:03.051 "max_queue_depth": 64, 00:06:03.051 "default_time2wait": 2, 00:06:03.051 "default_time2retain": 20, 00:06:03.051 "first_burst_length": 8192, 00:06:03.051 "immediate_data": true, 00:06:03.051 "allow_duplicated_isid": false, 00:06:03.051 "error_recovery_level": 0, 00:06:03.051 "nop_timeout": 60, 00:06:03.051 "nop_in_interval": 30, 00:06:03.051 "disable_chap": false, 00:06:03.051 
"require_chap": false, 00:06:03.051 "mutual_chap": false, 00:06:03.051 "chap_group": 0, 00:06:03.051 "max_large_datain_per_connection": 64, 00:06:03.052 "max_r2t_per_connection": 4, 00:06:03.052 "pdu_pool_size": 36864, 00:06:03.052 "immediate_data_pool_size": 16384, 00:06:03.052 "data_out_pool_size": 2048 00:06:03.052 } 00:06:03.052 } 00:06:03.052 ] 00:06:03.052 } 00:06:03.052 ] 00:06:03.052 } 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 71056 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71056 ']' 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71056 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71056 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:03.052 killing process with pid 71056 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71056' 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71056 00:06:03.052 21:47:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71056 00:06:03.308 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=71085 00:06:03.308 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:03.308 21:47:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:08.578 21:47:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 71085 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 71085 ']' 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 71085 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71085 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.579 killing process with pid 71085 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71085' 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 71085 00:06:08.579 21:47:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 71085 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm 
/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.579 00:06:08.579 real 0m6.620s 00:06:08.579 user 0m6.325s 00:06:08.579 sys 0m0.529s 00:06:08.579 ************************************ 00:06:08.579 END TEST skip_rpc_with_json 00:06:08.579 ************************************ 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.579 21:47:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.579 21:47:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.579 21:47:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.579 21:47:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.579 ************************************ 00:06:08.579 START TEST skip_rpc_with_delay 00:06:08.579 ************************************ 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.579 [2024-09-30 21:47:53.320680] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
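skip_rpc_with_json above snapshots the live configuration and replays it at the next startup. The round trip at rpc/skip_rpc.sh@34-52 reduces to roughly this (a sketch; the redirections are assumed, the paths are those of the test):

    # on the running target: create the TCP transport, then snapshot everything
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json

    # restart with the saved config; replaying JSON needs no RPC server
    build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json test/rpc/config.json > test/rpc/log.txt 2>&1 &
    sleep 5

    # the transport-init notice in the log proves the saved JSON was applied
    grep -q 'TCP Transport Init' test/rpc/log.txt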
00:06:08.579 [2024-09-30 21:47:53.320796] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.579 00:06:08.579 real 0m0.117s 00:06:08.579 user 0m0.060s 00:06:08.579 sys 0m0.055s 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.579 21:47:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:08.579 ************************************ 00:06:08.579 END TEST skip_rpc_with_delay 00:06:08.579 ************************************ 00:06:08.838 21:47:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:08.838 21:47:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:08.838 21:47:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:08.838 21:47:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.838 21:47:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.838 21:47:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.838 ************************************ 00:06:08.838 START TEST exit_on_failed_rpc_init 00:06:08.838 ************************************ 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=71196 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 71196 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 71196 ']' 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.838 21:47:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.838 [2024-09-30 21:47:53.481826] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:08.838 [2024-09-30 21:47:53.481914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71196 ] 00:06:08.838 [2024-09-30 21:47:53.604100] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:08.838 [2024-09-30 21:47:53.622959] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.097 [2024-09-30 21:47:53.652003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.663 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.664 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.664 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.664 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.664 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:09.664 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.664 [2024-09-30 21:47:54.382828] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:09.664 [2024-09-30 21:47:54.382942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71209 ] 00:06:09.922 [2024-09-30 21:47:54.510858] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.922 [2024-09-30 21:47:54.529696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.922 [2024-09-30 21:47:54.561557] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.922 [2024-09-30 21:47:54.561637] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
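exit_on_failed_rpc_init hinges on the second target instance dying because the first already owns /var/tmp/spdk.sock. Schematically (a sketch; waitforlisten replaced by a plain sleep):

    # the first instance claims the default RPC socket
    build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 5

    # a second instance on another core mask must exit non-zero with
    # "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another."
    if build/bin/spdk_tgt -m 0x2; then
        echo "FAIL: second target started despite the socket conflict" >&2
    fi

    kill "$first" && wait "$first"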
00:06:09.922 [2024-09-30 21:47:54.561652] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.922 [2024-09-30 21:47:54.561663] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 71196 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 71196 ']' 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 71196 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71196 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71196' 00:06:09.922 killing process with pid 71196 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 71196 00:06:09.922 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 71196 00:06:10.181 00:06:10.181 real 0m1.488s 00:06:10.181 user 0m1.653s 00:06:10.181 sys 0m0.357s 00:06:10.181 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.181 ************************************ 00:06:10.181 END TEST exit_on_failed_rpc_init 00:06:10.181 ************************************ 00:06:10.181 21:47:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:10.181 21:47:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:10.181 ************************************ 00:06:10.181 END TEST skip_rpc 00:06:10.181 00:06:10.181 real 0m13.873s 00:06:10.181 user 0m13.049s 00:06:10.181 sys 0m1.398s 00:06:10.181 21:47:54 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.181 21:47:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.181 ************************************ 00:06:10.438 21:47:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.438 21:47:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.438 21:47:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.438 21:47:55 -- common/autotest_common.sh@10 -- # set +x 00:06:10.438 
************************************ 00:06:10.438 START TEST rpc_client 00:06:10.438 ************************************ 00:06:10.438 21:47:55 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:10.438 * Looking for test storage... 00:06:10.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:10.438 21:47:55 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.438 21:47:55 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.438 21:47:55 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.438 21:47:55 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.438 21:47:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.438 21:47:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.439 21:47:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:10.439 21:47:55 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.439 21:47:55 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.439 --rc genhtml_branch_coverage=1 00:06:10.439 --rc genhtml_function_coverage=1 00:06:10.439 --rc genhtml_legend=1 00:06:10.439 --rc geninfo_all_blocks=1 00:06:10.439 --rc geninfo_unexecuted_blocks=1 00:06:10.439 00:06:10.439 ' 00:06:10.439 21:47:55 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.439 --rc genhtml_branch_coverage=1 00:06:10.439 --rc genhtml_function_coverage=1 00:06:10.439 --rc genhtml_legend=1 00:06:10.439 --rc geninfo_all_blocks=1 00:06:10.439 --rc geninfo_unexecuted_blocks=1 00:06:10.439 00:06:10.439 ' 00:06:10.439 21:47:55 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.439 --rc genhtml_branch_coverage=1 00:06:10.439 --rc genhtml_function_coverage=1 00:06:10.439 --rc genhtml_legend=1 00:06:10.439 --rc geninfo_all_blocks=1 00:06:10.439 --rc geninfo_unexecuted_blocks=1 00:06:10.439 00:06:10.439 ' 00:06:10.439 21:47:55 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.439 --rc genhtml_branch_coverage=1 00:06:10.439 --rc genhtml_function_coverage=1 00:06:10.439 --rc genhtml_legend=1 00:06:10.439 --rc geninfo_all_blocks=1 00:06:10.439 --rc geninfo_unexecuted_blocks=1 00:06:10.439 00:06:10.439 ' 00:06:10.439 21:47:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:10.439 OK 00:06:10.439 21:47:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:10.439 00:06:10.439 real 0m0.177s 00:06:10.439 user 0m0.105s 00:06:10.439 sys 0m0.077s 00:06:10.439 21:47:55 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.439 ************************************ 00:06:10.439 END TEST rpc_client 00:06:10.439 ************************************ 00:06:10.439 21:47:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:10.439 21:47:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.439 21:47:55 -- 
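The lcov probe traced above is a field-by-field version compare: IFS=.-: splits each version string, and the loop walks the longer of the two arrays until one field decides. A sketch of the strictly-less-than path exercised by lt 1.15 2; the real scripts/common.sh also handles other operators and non-numeric fields:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field wins
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                                              # equal is not strictly less
    }
    lt 1.15 2 && echo "1.15 < 2"                              # matches the traced result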
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.439 21:47:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.439 21:47:55 -- common/autotest_common.sh@10 -- # set +x 00:06:10.439 ************************************ 00:06:10.439 START TEST json_config 00:06:10.439 ************************************ 00:06:10.439 21:47:55 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:10.696 21:47:55 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.696 21:47:55 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.696 21:47:55 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.696 21:47:55 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.696 21:47:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.696 21:47:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.696 21:47:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.696 21:47:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.696 21:47:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.696 21:47:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.696 21:47:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.696 21:47:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.696 21:47:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.696 21:47:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.696 21:47:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.696 21:47:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:10.696 21:47:55 json_config -- scripts/common.sh@345 -- # : 1 00:06:10.696 21:47:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.696 21:47:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.696 21:47:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:10.696 21:47:55 json_config -- scripts/common.sh@353 -- # local d=1 00:06:10.696 21:47:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.696 21:47:55 json_config -- scripts/common.sh@355 -- # echo 1 00:06:10.696 21:47:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.696 21:47:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:10.696 21:47:55 json_config -- scripts/common.sh@353 -- # local d=2 00:06:10.697 21:47:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.697 21:47:55 json_config -- scripts/common.sh@355 -- # echo 2 00:06:10.697 21:47:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.697 21:47:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.697 21:47:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.697 21:47:55 json_config -- scripts/common.sh@368 -- # return 0 00:06:10.697 21:47:55 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.697 21:47:55 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.697 --rc genhtml_branch_coverage=1 00:06:10.697 --rc genhtml_function_coverage=1 00:06:10.697 --rc genhtml_legend=1 00:06:10.697 --rc geninfo_all_blocks=1 00:06:10.697 --rc geninfo_unexecuted_blocks=1 00:06:10.697 00:06:10.697 ' 00:06:10.697 21:47:55 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.697 --rc genhtml_branch_coverage=1 00:06:10.697 --rc genhtml_function_coverage=1 00:06:10.697 --rc genhtml_legend=1 00:06:10.697 --rc geninfo_all_blocks=1 00:06:10.697 --rc geninfo_unexecuted_blocks=1 00:06:10.697 00:06:10.697 ' 00:06:10.697 21:47:55 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.697 --rc genhtml_branch_coverage=1 00:06:10.697 --rc genhtml_function_coverage=1 00:06:10.697 --rc genhtml_legend=1 00:06:10.697 --rc geninfo_all_blocks=1 00:06:10.697 --rc geninfo_unexecuted_blocks=1 00:06:10.697 00:06:10.697 ' 00:06:10.697 21:47:55 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.697 --rc genhtml_branch_coverage=1 00:06:10.697 --rc genhtml_function_coverage=1 00:06:10.697 --rc genhtml_legend=1 00:06:10.697 --rc geninfo_all_blocks=1 00:06:10.697 --rc geninfo_unexecuted_blocks=1 00:06:10.697 00:06:10.697 ' 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.697 21:47:55 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5187cb00-da92-4c3f-8bf1-79b20a57e81e 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5187cb00-da92-4c3f-8bf1-79b20a57e81e 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.697 21:47:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.697 21:47:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.697 21:47:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.697 21:47:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.697 21:47:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.697 21:47:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.697 21:47:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.697 21:47:55 json_config -- paths/export.sh@5 -- # export PATH 00:06:10.697 21:47:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@51 -- # : 0 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.697 21:47:55 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.697 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.697 21:47:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:10.697 WARNING: No tests are enabled so not running JSON configuration tests 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:10.697 21:47:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:10.697 00:06:10.697 real 0m0.134s 00:06:10.697 user 0m0.091s 00:06:10.697 sys 0m0.045s 00:06:10.697 21:47:55 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.697 21:47:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.697 ************************************ 00:06:10.697 END TEST json_config 00:06:10.697 ************************************ 00:06:10.697 21:47:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:10.697 21:47:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.697 21:47:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.697 21:47:55 -- common/autotest_common.sh@10 -- # set +x 00:06:10.697 ************************************ 00:06:10.697 START TEST json_config_extra_key 00:06:10.697 ************************************ 00:06:10.697 21:47:55 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:10.697 21:47:55 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:10.697 21:47:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:10.697 21:47:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:10.954 21:47:55 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.954 21:47:55 
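The WARNING above comes from a single arithmetic gate: with every relevant SPDK_TEST_* flag at 0, json_config.sh has nothing to exercise and exits cleanly. The gate as it appears in the trace (unset flags evaluate to 0 inside (( ))):

    if (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )); then
        echo 'WARNING: No tests are enabled so not running JSON configuration tests'
        exit 0
    fi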
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.954 21:47:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:10.954 21:47:55 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.955 --rc genhtml_branch_coverage=1 00:06:10.955 --rc genhtml_function_coverage=1 00:06:10.955 --rc genhtml_legend=1 00:06:10.955 --rc geninfo_all_blocks=1 00:06:10.955 --rc geninfo_unexecuted_blocks=1 00:06:10.955 00:06:10.955 ' 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.955 --rc genhtml_branch_coverage=1 00:06:10.955 --rc genhtml_function_coverage=1 00:06:10.955 --rc genhtml_legend=1 00:06:10.955 --rc geninfo_all_blocks=1 00:06:10.955 --rc geninfo_unexecuted_blocks=1 00:06:10.955 00:06:10.955 ' 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.955 --rc genhtml_branch_coverage=1 00:06:10.955 --rc genhtml_function_coverage=1 00:06:10.955 --rc genhtml_legend=1 00:06:10.955 --rc geninfo_all_blocks=1 00:06:10.955 --rc geninfo_unexecuted_blocks=1 00:06:10.955 00:06:10.955 ' 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.955 --rc genhtml_branch_coverage=1 00:06:10.955 --rc 
genhtml_function_coverage=1 00:06:10.955 --rc genhtml_legend=1 00:06:10.955 --rc geninfo_all_blocks=1 00:06:10.955 --rc geninfo_unexecuted_blocks=1 00:06:10.955 00:06:10.955 ' 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5187cb00-da92-4c3f-8bf1-79b20a57e81e 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5187cb00-da92-4c3f-8bf1-79b20a57e81e 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:10.955 21:47:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.955 21:47:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.955 21:47:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.955 21:47:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.955 21:47:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 21:47:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 21:47:55 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 21:47:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.955 21:47:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.955 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.955 21:47:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.955 INFO: launching applications... 00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
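Both json_config suites also record a real bug from nvmf/common.sh line 33: '[' '' -eq 1 ']' fails with "integer expression expected" because the tested variable expands empty. A defensive sketch of the fix; the variable name is not visible in the trace, so $flag is a stand-in:

    # $flag is hypothetical; defaulting it keeps the numeric test well-formed
    if [ "${flag:-0}" -eq 1 ]; then
        echo 'feature enabled'
    fi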
00:06:10.955 21:47:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71391 00:06:10.955 Waiting for target to run... 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71391 /var/tmp/spdk_tgt.sock 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 71391 ']' 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.955 21:47:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.955 21:47:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:10.955 [2024-09-30 21:47:55.632052] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:10.955 [2024-09-30 21:47:55.632175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71391 ] 00:06:11.212 [2024-09-30 21:47:55.922926] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:11.212 [2024-09-30 21:47:55.943658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.212 [2024-09-30 21:47:55.959757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.777 21:47:56 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.777 21:47:56 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:11.777 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.777 INFO: shutting down applications... 00:06:11.777 21:47:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
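The waitforlisten call above backs the "Waiting for target to run..." message with a bounded poll (max_retries=100 in the trace) rather than a fixed sleep. A minimal sketch of that pattern, using rpc_get_methods as the liveness probe; the real helper carries more bookkeeping:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        for (( i = 0; i < 100; i++ )); do               # max_retries=100, as traced
            kill -0 "$pid" 2>/dev/null || return 1      # target died before listening
            "$rpc" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    waitforlisten 71391 /var/tmp/spdk_tgt.sock          # as invoked by the test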
00:06:11.777 21:47:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71391 ]] 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71391 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71391 00:06:11.777 21:47:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:12.343 21:47:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:12.343 21:47:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:12.343 21:47:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71391 00:06:12.343 21:47:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:12.343 21:47:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:12.343 SPDK target shutdown done 00:06:12.343 21:47:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:12.343 21:47:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:12.343 Success 00:06:12.343 21:47:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:12.343 00:06:12.343 real 0m1.544s 00:06:12.343 user 0m1.217s 00:06:12.343 sys 0m0.355s 00:06:12.343 21:47:56 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.343 ************************************ 00:06:12.343 END TEST json_config_extra_key 00:06:12.343 ************************************ 00:06:12.343 21:47:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:12.343 21:47:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.343 21:47:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.343 21:47:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.343 21:47:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.343 ************************************ 00:06:12.343 START TEST alias_rpc 00:06:12.343 ************************************ 00:06:12.343 21:47:57 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:12.343 * Looking for test storage... 
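The shutdown traced above is the mirror image: SIGINT first, then kill -0 every half second for up to 30 tries before announcing "SPDK target shutdown done". A sketch of that loop, assuming the target exits gracefully:

    json_config_test_shutdown_app() {                   # name from the trace; body is a sketch
        local pid=$1 i
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || break         # process gone: shutdown finished
            sleep 0.5
        done
        echo 'SPDK target shutdown done'
    }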
00:06:12.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:12.343 21:47:57 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:12.343 21:47:57 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:12.343 21:47:57 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.600 21:47:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:12.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.600 --rc genhtml_branch_coverage=1 00:06:12.600 --rc genhtml_function_coverage=1 00:06:12.600 --rc genhtml_legend=1 00:06:12.600 --rc geninfo_all_blocks=1 00:06:12.600 --rc geninfo_unexecuted_blocks=1 00:06:12.600 00:06:12.600 ' 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:12.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.600 --rc genhtml_branch_coverage=1 00:06:12.600 --rc genhtml_function_coverage=1 00:06:12.600 --rc genhtml_legend=1 00:06:12.600 --rc geninfo_all_blocks=1 00:06:12.600 --rc geninfo_unexecuted_blocks=1 00:06:12.600 00:06:12.600 ' 00:06:12.600 21:47:57 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:12.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.600 --rc genhtml_branch_coverage=1 00:06:12.600 --rc genhtml_function_coverage=1 00:06:12.600 --rc genhtml_legend=1 00:06:12.600 --rc geninfo_all_blocks=1 00:06:12.600 --rc geninfo_unexecuted_blocks=1 00:06:12.600 00:06:12.600 ' 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:12.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.600 --rc genhtml_branch_coverage=1 00:06:12.600 --rc genhtml_function_coverage=1 00:06:12.600 --rc genhtml_legend=1 00:06:12.600 --rc geninfo_all_blocks=1 00:06:12.600 --rc geninfo_unexecuted_blocks=1 00:06:12.600 00:06:12.600 ' 00:06:12.600 21:47:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:12.600 21:47:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71465 00:06:12.600 21:47:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71465 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 71465 ']' 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.600 21:47:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.600 21:47:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.600 [2024-09-30 21:47:57.240959] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:12.600 [2024-09-30 21:47:57.241082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71465 ] 00:06:12.600 [2024-09-30 21:47:57.369275] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
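The ERR trap at the top of alias_rpc.sh, visible above, is the suite's cleanup guarantee: any failing command kills the target before the script exits. The same pattern in miniature, with killprocess replaced by a plain kill for brevity; the load_config call appears in the records that follow:

    trap 'kill "$spdk_tgt_pid" 2>/dev/null; exit 1' ERR
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # ...test body, e.g. rpc.py load_config -i as on alias_rpc.sh line @17...
    trap - ERR                                          # disarm once the test passes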
00:06:12.600 [2024-09-30 21:47:57.381994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.857 [2024-09-30 21:47:57.414714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.420 21:47:58 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.420 21:47:58 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.420 21:47:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:13.678 21:47:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71465 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 71465 ']' 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 71465 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71465 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.678 killing process with pid 71465 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71465' 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@969 -- # kill 71465 00:06:13.678 21:47:58 alias_rpc -- common/autotest_common.sh@974 -- # wait 71465 00:06:13.937 00:06:13.937 real 0m1.542s 00:06:13.937 user 0m1.638s 00:06:13.937 sys 0m0.384s 00:06:13.937 ************************************ 00:06:13.937 END TEST alias_rpc 00:06:13.937 ************************************ 00:06:13.937 21:47:58 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.937 21:47:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.937 21:47:58 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:13.937 21:47:58 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.937 21:47:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.937 21:47:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.937 21:47:58 -- common/autotest_common.sh@10 -- # set +x 00:06:13.937 ************************************ 00:06:13.937 START TEST spdkcli_tcp 00:06:13.937 ************************************ 00:06:13.937 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:13.937 * Looking for test storage... 
00:06:13.937 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:13.937 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.937 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:13.937 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:14.198 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.198 21:47:58 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.198 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.198 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.198 --rc genhtml_branch_coverage=1 00:06:14.198 --rc genhtml_function_coverage=1 00:06:14.198 --rc genhtml_legend=1 00:06:14.198 --rc geninfo_all_blocks=1 00:06:14.198 --rc geninfo_unexecuted_blocks=1 00:06:14.198 00:06:14.198 ' 00:06:14.198 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.198 --rc genhtml_branch_coverage=1 00:06:14.198 --rc genhtml_function_coverage=1 00:06:14.198 --rc genhtml_legend=1 00:06:14.198 --rc geninfo_all_blocks=1 00:06:14.198 --rc geninfo_unexecuted_blocks=1 00:06:14.198 
00:06:14.198 ' 00:06:14.198 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.198 --rc genhtml_branch_coverage=1 00:06:14.198 --rc genhtml_function_coverage=1 00:06:14.198 --rc genhtml_legend=1 00:06:14.198 --rc geninfo_all_blocks=1 00:06:14.198 --rc geninfo_unexecuted_blocks=1 00:06:14.198 00:06:14.198 ' 00:06:14.198 21:47:58 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.198 --rc genhtml_branch_coverage=1 00:06:14.198 --rc genhtml_function_coverage=1 00:06:14.198 --rc genhtml_legend=1 00:06:14.198 --rc geninfo_all_blocks=1 00:06:14.198 --rc geninfo_unexecuted_blocks=1 00:06:14.198 00:06:14.198 ' 00:06:14.198 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:14.198 21:47:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:14.198 21:47:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:14.198 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:14.198 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:14.198 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:14.198 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.199 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71544 00:06:14.199 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71544 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 71544 ']' 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.199 21:47:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.199 21:47:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:14.199 [2024-09-30 21:47:58.848807] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:14.199 [2024-09-30 21:47:58.848956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71544 ] 00:06:14.199 [2024-09-30 21:47:58.977343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
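tcp.sh pins IP_ADDRESS=127.0.0.1 and PORT=9998 above because the target itself listens only on a Unix socket; the records that follow bridge the two with socat so rpc.py can speak TCP. The bridge in isolation, with the exact flags from the trace:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid"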
00:06:14.199 [2024-09-30 21:47:58.991404] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.457 [2024-09-30 21:47:59.024270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.457 [2024-09-30 21:47:59.024304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.022 21:47:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.022 21:47:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:15.022 21:47:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71561 00:06:15.022 21:47:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:15.022 21:47:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:15.281 [ 00:06:15.281 "bdev_malloc_delete", 00:06:15.281 "bdev_malloc_create", 00:06:15.281 "bdev_null_resize", 00:06:15.281 "bdev_null_delete", 00:06:15.281 "bdev_null_create", 00:06:15.281 "bdev_nvme_cuse_unregister", 00:06:15.281 "bdev_nvme_cuse_register", 00:06:15.281 "bdev_opal_new_user", 00:06:15.281 "bdev_opal_set_lock_state", 00:06:15.281 "bdev_opal_delete", 00:06:15.281 "bdev_opal_get_info", 00:06:15.281 "bdev_opal_create", 00:06:15.281 "bdev_nvme_opal_revert", 00:06:15.281 "bdev_nvme_opal_init", 00:06:15.281 "bdev_nvme_send_cmd", 00:06:15.281 "bdev_nvme_set_keys", 00:06:15.281 "bdev_nvme_get_path_iostat", 00:06:15.281 "bdev_nvme_get_mdns_discovery_info", 00:06:15.281 "bdev_nvme_stop_mdns_discovery", 00:06:15.281 "bdev_nvme_start_mdns_discovery", 00:06:15.281 "bdev_nvme_set_multipath_policy", 00:06:15.281 "bdev_nvme_set_preferred_path", 00:06:15.281 "bdev_nvme_get_io_paths", 00:06:15.281 "bdev_nvme_remove_error_injection", 00:06:15.281 "bdev_nvme_add_error_injection", 00:06:15.281 "bdev_nvme_get_discovery_info", 00:06:15.281 "bdev_nvme_stop_discovery", 00:06:15.281 "bdev_nvme_start_discovery", 00:06:15.281 "bdev_nvme_get_controller_health_info", 00:06:15.281 "bdev_nvme_disable_controller", 00:06:15.281 "bdev_nvme_enable_controller", 00:06:15.281 "bdev_nvme_reset_controller", 00:06:15.281 "bdev_nvme_get_transport_statistics", 00:06:15.281 "bdev_nvme_apply_firmware", 00:06:15.281 "bdev_nvme_detach_controller", 00:06:15.281 "bdev_nvme_get_controllers", 00:06:15.281 "bdev_nvme_attach_controller", 00:06:15.281 "bdev_nvme_set_hotplug", 00:06:15.281 "bdev_nvme_set_options", 00:06:15.281 "bdev_passthru_delete", 00:06:15.281 "bdev_passthru_create", 00:06:15.281 "bdev_lvol_set_parent_bdev", 00:06:15.281 "bdev_lvol_set_parent", 00:06:15.281 "bdev_lvol_check_shallow_copy", 00:06:15.281 "bdev_lvol_start_shallow_copy", 00:06:15.281 "bdev_lvol_grow_lvstore", 00:06:15.281 "bdev_lvol_get_lvols", 00:06:15.281 "bdev_lvol_get_lvstores", 00:06:15.281 "bdev_lvol_delete", 00:06:15.281 "bdev_lvol_set_read_only", 00:06:15.281 "bdev_lvol_resize", 00:06:15.281 "bdev_lvol_decouple_parent", 00:06:15.281 "bdev_lvol_inflate", 00:06:15.281 "bdev_lvol_rename", 00:06:15.281 "bdev_lvol_clone_bdev", 00:06:15.281 "bdev_lvol_clone", 00:06:15.281 "bdev_lvol_snapshot", 00:06:15.281 "bdev_lvol_create", 00:06:15.281 "bdev_lvol_delete_lvstore", 00:06:15.281 "bdev_lvol_rename_lvstore", 00:06:15.281 "bdev_lvol_create_lvstore", 00:06:15.281 "bdev_raid_set_options", 00:06:15.281 "bdev_raid_remove_base_bdev", 00:06:15.281 "bdev_raid_add_base_bdev", 00:06:15.281 "bdev_raid_delete", 00:06:15.281 "bdev_raid_create", 00:06:15.281 "bdev_raid_get_bdevs", 00:06:15.281 "bdev_error_inject_error", 00:06:15.281 
"bdev_error_delete", 00:06:15.281 "bdev_error_create", 00:06:15.281 "bdev_split_delete", 00:06:15.281 "bdev_split_create", 00:06:15.281 "bdev_delay_delete", 00:06:15.281 "bdev_delay_create", 00:06:15.281 "bdev_delay_update_latency", 00:06:15.281 "bdev_zone_block_delete", 00:06:15.281 "bdev_zone_block_create", 00:06:15.281 "blobfs_create", 00:06:15.281 "blobfs_detect", 00:06:15.281 "blobfs_set_cache_size", 00:06:15.281 "bdev_xnvme_delete", 00:06:15.281 "bdev_xnvme_create", 00:06:15.281 "bdev_aio_delete", 00:06:15.281 "bdev_aio_rescan", 00:06:15.281 "bdev_aio_create", 00:06:15.281 "bdev_ftl_set_property", 00:06:15.281 "bdev_ftl_get_properties", 00:06:15.281 "bdev_ftl_get_stats", 00:06:15.281 "bdev_ftl_unmap", 00:06:15.281 "bdev_ftl_unload", 00:06:15.281 "bdev_ftl_delete", 00:06:15.281 "bdev_ftl_load", 00:06:15.281 "bdev_ftl_create", 00:06:15.281 "bdev_virtio_attach_controller", 00:06:15.281 "bdev_virtio_scsi_get_devices", 00:06:15.281 "bdev_virtio_detach_controller", 00:06:15.281 "bdev_virtio_blk_set_hotplug", 00:06:15.281 "bdev_iscsi_delete", 00:06:15.281 "bdev_iscsi_create", 00:06:15.281 "bdev_iscsi_set_options", 00:06:15.281 "accel_error_inject_error", 00:06:15.281 "ioat_scan_accel_module", 00:06:15.281 "dsa_scan_accel_module", 00:06:15.281 "iaa_scan_accel_module", 00:06:15.281 "keyring_file_remove_key", 00:06:15.281 "keyring_file_add_key", 00:06:15.281 "keyring_linux_set_options", 00:06:15.281 "fsdev_aio_delete", 00:06:15.281 "fsdev_aio_create", 00:06:15.281 "iscsi_get_histogram", 00:06:15.281 "iscsi_enable_histogram", 00:06:15.281 "iscsi_set_options", 00:06:15.281 "iscsi_get_auth_groups", 00:06:15.281 "iscsi_auth_group_remove_secret", 00:06:15.281 "iscsi_auth_group_add_secret", 00:06:15.281 "iscsi_delete_auth_group", 00:06:15.281 "iscsi_create_auth_group", 00:06:15.281 "iscsi_set_discovery_auth", 00:06:15.281 "iscsi_get_options", 00:06:15.282 "iscsi_target_node_request_logout", 00:06:15.282 "iscsi_target_node_set_redirect", 00:06:15.282 "iscsi_target_node_set_auth", 00:06:15.282 "iscsi_target_node_add_lun", 00:06:15.282 "iscsi_get_stats", 00:06:15.282 "iscsi_get_connections", 00:06:15.282 "iscsi_portal_group_set_auth", 00:06:15.282 "iscsi_start_portal_group", 00:06:15.282 "iscsi_delete_portal_group", 00:06:15.282 "iscsi_create_portal_group", 00:06:15.282 "iscsi_get_portal_groups", 00:06:15.282 "iscsi_delete_target_node", 00:06:15.282 "iscsi_target_node_remove_pg_ig_maps", 00:06:15.282 "iscsi_target_node_add_pg_ig_maps", 00:06:15.282 "iscsi_create_target_node", 00:06:15.282 "iscsi_get_target_nodes", 00:06:15.282 "iscsi_delete_initiator_group", 00:06:15.282 "iscsi_initiator_group_remove_initiators", 00:06:15.282 "iscsi_initiator_group_add_initiators", 00:06:15.282 "iscsi_create_initiator_group", 00:06:15.282 "iscsi_get_initiator_groups", 00:06:15.282 "nvmf_set_crdt", 00:06:15.282 "nvmf_set_config", 00:06:15.282 "nvmf_set_max_subsystems", 00:06:15.282 "nvmf_stop_mdns_prr", 00:06:15.282 "nvmf_publish_mdns_prr", 00:06:15.282 "nvmf_subsystem_get_listeners", 00:06:15.282 "nvmf_subsystem_get_qpairs", 00:06:15.282 "nvmf_subsystem_get_controllers", 00:06:15.282 "nvmf_get_stats", 00:06:15.282 "nvmf_get_transports", 00:06:15.282 "nvmf_create_transport", 00:06:15.282 "nvmf_get_targets", 00:06:15.282 "nvmf_delete_target", 00:06:15.282 "nvmf_create_target", 00:06:15.282 "nvmf_subsystem_allow_any_host", 00:06:15.282 "nvmf_subsystem_set_keys", 00:06:15.282 "nvmf_subsystem_remove_host", 00:06:15.282 "nvmf_subsystem_add_host", 00:06:15.282 "nvmf_ns_remove_host", 00:06:15.282 "nvmf_ns_add_host", 
00:06:15.282 "nvmf_subsystem_remove_ns", 00:06:15.282 "nvmf_subsystem_set_ns_ana_group", 00:06:15.282 "nvmf_subsystem_add_ns", 00:06:15.282 "nvmf_subsystem_listener_set_ana_state", 00:06:15.282 "nvmf_discovery_get_referrals", 00:06:15.282 "nvmf_discovery_remove_referral", 00:06:15.282 "nvmf_discovery_add_referral", 00:06:15.282 "nvmf_subsystem_remove_listener", 00:06:15.282 "nvmf_subsystem_add_listener", 00:06:15.282 "nvmf_delete_subsystem", 00:06:15.282 "nvmf_create_subsystem", 00:06:15.282 "nvmf_get_subsystems", 00:06:15.282 "env_dpdk_get_mem_stats", 00:06:15.282 "nbd_get_disks", 00:06:15.282 "nbd_stop_disk", 00:06:15.282 "nbd_start_disk", 00:06:15.282 "ublk_recover_disk", 00:06:15.282 "ublk_get_disks", 00:06:15.282 "ublk_stop_disk", 00:06:15.282 "ublk_start_disk", 00:06:15.282 "ublk_destroy_target", 00:06:15.282 "ublk_create_target", 00:06:15.282 "virtio_blk_create_transport", 00:06:15.282 "virtio_blk_get_transports", 00:06:15.282 "vhost_controller_set_coalescing", 00:06:15.282 "vhost_get_controllers", 00:06:15.282 "vhost_delete_controller", 00:06:15.282 "vhost_create_blk_controller", 00:06:15.282 "vhost_scsi_controller_remove_target", 00:06:15.282 "vhost_scsi_controller_add_target", 00:06:15.282 "vhost_start_scsi_controller", 00:06:15.282 "vhost_create_scsi_controller", 00:06:15.282 "thread_set_cpumask", 00:06:15.282 "scheduler_set_options", 00:06:15.282 "framework_get_governor", 00:06:15.282 "framework_get_scheduler", 00:06:15.282 "framework_set_scheduler", 00:06:15.282 "framework_get_reactors", 00:06:15.282 "thread_get_io_channels", 00:06:15.282 "thread_get_pollers", 00:06:15.282 "thread_get_stats", 00:06:15.282 "framework_monitor_context_switch", 00:06:15.282 "spdk_kill_instance", 00:06:15.282 "log_enable_timestamps", 00:06:15.282 "log_get_flags", 00:06:15.282 "log_clear_flag", 00:06:15.282 "log_set_flag", 00:06:15.282 "log_get_level", 00:06:15.282 "log_set_level", 00:06:15.282 "log_get_print_level", 00:06:15.282 "log_set_print_level", 00:06:15.282 "framework_enable_cpumask_locks", 00:06:15.282 "framework_disable_cpumask_locks", 00:06:15.282 "framework_wait_init", 00:06:15.282 "framework_start_init", 00:06:15.282 "scsi_get_devices", 00:06:15.282 "bdev_get_histogram", 00:06:15.282 "bdev_enable_histogram", 00:06:15.282 "bdev_set_qos_limit", 00:06:15.282 "bdev_set_qd_sampling_period", 00:06:15.282 "bdev_get_bdevs", 00:06:15.282 "bdev_reset_iostat", 00:06:15.282 "bdev_get_iostat", 00:06:15.282 "bdev_examine", 00:06:15.282 "bdev_wait_for_examine", 00:06:15.282 "bdev_set_options", 00:06:15.282 "accel_get_stats", 00:06:15.282 "accel_set_options", 00:06:15.282 "accel_set_driver", 00:06:15.282 "accel_crypto_key_destroy", 00:06:15.282 "accel_crypto_keys_get", 00:06:15.282 "accel_crypto_key_create", 00:06:15.282 "accel_assign_opc", 00:06:15.282 "accel_get_module_info", 00:06:15.282 "accel_get_opc_assignments", 00:06:15.282 "vmd_rescan", 00:06:15.282 "vmd_remove_device", 00:06:15.282 "vmd_enable", 00:06:15.282 "sock_get_default_impl", 00:06:15.282 "sock_set_default_impl", 00:06:15.282 "sock_impl_set_options", 00:06:15.282 "sock_impl_get_options", 00:06:15.282 "iobuf_get_stats", 00:06:15.282 "iobuf_set_options", 00:06:15.282 "keyring_get_keys", 00:06:15.282 "framework_get_pci_devices", 00:06:15.282 "framework_get_config", 00:06:15.282 "framework_get_subsystems", 00:06:15.282 "fsdev_set_opts", 00:06:15.282 "fsdev_get_opts", 00:06:15.282 "trace_get_info", 00:06:15.282 "trace_get_tpoint_group_mask", 00:06:15.282 "trace_disable_tpoint_group", 00:06:15.282 "trace_enable_tpoint_group", 00:06:15.282 
"trace_clear_tpoint_mask", 00:06:15.282 "trace_set_tpoint_mask", 00:06:15.282 "notify_get_notifications", 00:06:15.282 "notify_get_types", 00:06:15.282 "spdk_get_version", 00:06:15.282 "rpc_get_methods" 00:06:15.282 ] 00:06:15.282 21:47:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.282 21:47:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:15.282 21:47:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71544 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 71544 ']' 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 71544 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71544 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.282 killing process with pid 71544 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71544' 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 71544 00:06:15.282 21:47:59 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 71544 00:06:15.540 00:06:15.540 real 0m1.589s 00:06:15.540 user 0m2.832s 00:06:15.540 sys 0m0.389s 00:06:15.540 21:48:00 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.540 ************************************ 00:06:15.540 END TEST spdkcli_tcp 00:06:15.540 ************************************ 00:06:15.540 21:48:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:15.540 21:48:00 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.540 21:48:00 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.540 21:48:00 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.540 21:48:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.540 ************************************ 00:06:15.540 START TEST dpdk_mem_utility 00:06:15.540 ************************************ 00:06:15.540 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.540 * Looking for test storage... 
00:06:15.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.798 21:48:00 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.798 --rc genhtml_branch_coverage=1 00:06:15.798 --rc genhtml_function_coverage=1 00:06:15.798 --rc genhtml_legend=1 00:06:15.798 --rc geninfo_all_blocks=1 00:06:15.798 --rc geninfo_unexecuted_blocks=1 00:06:15.798 00:06:15.798 ' 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.798 --rc 
genhtml_branch_coverage=1 00:06:15.798 --rc genhtml_function_coverage=1 00:06:15.798 --rc genhtml_legend=1 00:06:15.798 --rc geninfo_all_blocks=1 00:06:15.798 --rc geninfo_unexecuted_blocks=1 00:06:15.798 00:06:15.798 ' 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.798 --rc genhtml_branch_coverage=1 00:06:15.798 --rc genhtml_function_coverage=1 00:06:15.798 --rc genhtml_legend=1 00:06:15.798 --rc geninfo_all_blocks=1 00:06:15.798 --rc geninfo_unexecuted_blocks=1 00:06:15.798 00:06:15.798 ' 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.798 --rc genhtml_branch_coverage=1 00:06:15.798 --rc genhtml_function_coverage=1 00:06:15.798 --rc genhtml_legend=1 00:06:15.798 --rc geninfo_all_blocks=1 00:06:15.798 --rc geninfo_unexecuted_blocks=1 00:06:15.798 00:06:15.798 ' 00:06:15.798 21:48:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:15.798 21:48:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71639 00:06:15.798 21:48:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71639 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 71639 ']' 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.798 21:48:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.798 21:48:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.798 [2024-09-30 21:48:00.505930] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:15.798 [2024-09-30 21:48:00.506051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71639 ] 00:06:16.056 [2024-09-30 21:48:00.634653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
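
Note: the spdkcli_tcp run that ends above drives SPDK's JSON-RPC interface over TCP instead of the default UNIX-domain socket: socat bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py connects to that port, which is why the full rpc_get_methods listing appears in the log. A minimal by-hand sketch under the same assumptions the test makes (an SPDK checkout, spdk_tgt already running, and the same port/retry/timeout values):

  # bridge TCP port 9998 to the default SPDK RPC socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  # -r 100 connect retries, -t 2 s per-call timeout; lists every RPC the target exposes
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
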
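Note: the dpdk_mem_utility run starting here launches its own spdk_tgt (pid 71639), asks it to dump DPDK memory statistics via the env_dpdk_get_mem_stats RPC, and then post-processes the dump with scripts/dpdk_mem_info.py; the heap/mempool/memzone summary and the long per-element listing that follow are that script's output. An equivalent by-hand flow, assuming paths relative to an SPDK checkout:

  ./build/bin/spdk_tgt &                     # start the target; wait for /var/tmp/spdk.sock
  ./scripts/rpc.py env_dpdk_get_mem_stats    # writes the dump to /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                 # summarize heaps, mempools and memzones
  ./scripts/dpdk_mem_info.py -m 0            # per-element detail for malloc heap 0, as below
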
00:06:16.056 [2024-09-30 21:48:00.654173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.056 [2024-09-30 21:48:00.694620] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.622 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.622 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:16.622 21:48:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:16.622 21:48:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:16.622 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:16.622 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.622 { 00:06:16.622 "filename": "/tmp/spdk_mem_dump.txt" 00:06:16.622 } 00:06:16.622 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:16.622 21:48:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:16.622 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:16.622 1 heaps totaling size 860.000000 MiB 00:06:16.622 size: 860.000000 MiB heap id: 0 00:06:16.622 end heaps---------- 00:06:16.622 9 mempools totaling size 642.649841 MiB 00:06:16.622 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:16.622 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:16.622 size: 92.545471 MiB name: bdev_io_71639 00:06:16.622 size: 51.011292 MiB name: evtpool_71639 00:06:16.622 size: 50.003479 MiB name: msgpool_71639 00:06:16.622 size: 36.509338 MiB name: fsdev_io_71639 00:06:16.622 size: 21.763794 MiB name: PDU_Pool 00:06:16.622 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:16.622 size: 0.026123 MiB name: Session_Pool 00:06:16.622 end mempools------- 00:06:16.622 6 memzones totaling size 4.142822 MiB 00:06:16.622 size: 1.000366 MiB name: RG_ring_0_71639 00:06:16.622 size: 1.000366 MiB name: RG_ring_1_71639 00:06:16.622 size: 1.000366 MiB name: RG_ring_4_71639 00:06:16.622 size: 1.000366 MiB name: RG_ring_5_71639 00:06:16.622 size: 0.125366 MiB name: RG_ring_2_71639 00:06:16.622 size: 0.015991 MiB name: RG_ring_3_71639 00:06:16.622 end memzones------- 00:06:16.622 21:48:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:16.880 heap id: 0 total size: 860.000000 MiB number of busy elements: 303 number of free elements: 16 00:06:16.880 list of free elements. 
size: 13.937256 MiB 00:06:16.880 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:16.880 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:16.880 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:16.880 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:16.880 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:16.880 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:16.880 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:16.880 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:16.880 element at address: 0x200000200000 with size: 0.834839 MiB 00:06:16.880 element at address: 0x20001d800000 with size: 0.568237 MiB 00:06:16.880 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:16.880 element at address: 0x200003e00000 with size: 0.488647 MiB 00:06:16.880 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:16.880 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:16.880 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:16.880 element at address: 0x200003a00000 with size: 0.353027 MiB 00:06:16.880 list of standard malloc elements. size: 199.266052 MiB 00:06:16.880 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:16.880 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:16.880 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:16.880 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:16.880 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:16.880 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:16.880 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:16.880 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:16.880 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:16.880 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:16.880 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6d40 with size: 0.000183 MiB 
00:06:16.881 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a5a600 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a5eac0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:16.881 element at 
address: 0x200003e7d900 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:16.881 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87d940 
with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893640 with size: 0.000183 MiB 
00:06:16.881 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:16.881 element at 
address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:16.881 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6ed00 
with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:16.882 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:16.882 list of memzone associated elements. 
size: 646.796692 MiB 00:06:16.882 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:16.882 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:16.882 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:16.882 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:16.882 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:16.882 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71639_0 00:06:16.882 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:16.882 associated memzone info: size: 48.002930 MiB name: MP_evtpool_71639_0 00:06:16.882 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:16.882 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71639_0 00:06:16.882 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:16.882 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71639_0 00:06:16.882 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:16.882 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:16.882 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:16.882 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:16.882 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:16.882 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_71639 00:06:16.882 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:16.882 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71639 00:06:16.882 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:16.882 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71639 00:06:16.882 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:16.882 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:16.882 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:16.882 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:16.882 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:16.882 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:16.882 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:16.882 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:16.882 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:16.882 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71639 00:06:16.882 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:16.882 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71639 00:06:16.882 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:16.882 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71639 00:06:16.882 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:16.882 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71639 00:06:16.882 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:16.882 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71639 00:06:16.882 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:16.882 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71639 00:06:16.882 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:16.882 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:16.882 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:16.882 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:16.882 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:16.882 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:16.882 element at address: 0x200003a5eb80 with size: 0.125488 MiB 00:06:16.882 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71639 00:06:16.882 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:16.882 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:16.882 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:16.882 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:16.882 element at address: 0x200003a5a8c0 with size: 0.016113 MiB 00:06:16.882 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71639 00:06:16.882 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:16.882 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:16.882 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:16.882 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71639 00:06:16.882 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:16.882 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71639 00:06:16.882 element at address: 0x200003a5a6c0 with size: 0.000305 MiB 00:06:16.882 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71639 00:06:16.882 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:16.882 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:16.882 21:48:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:16.882 21:48:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71639 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 71639 ']' 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 71639 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71639 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.882 killing process with pid 71639 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71639' 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 71639 00:06:16.882 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 71639 00:06:17.138 00:06:17.138 real 0m1.460s 00:06:17.138 user 0m1.495s 00:06:17.138 sys 0m0.372s 00:06:17.138 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.138 ************************************ 00:06:17.138 END TEST dpdk_mem_utility 00:06:17.138 ************************************ 00:06:17.138 21:48:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:17.138 21:48:01 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.138 21:48:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.138 21:48:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.138 21:48:01 -- common/autotest_common.sh@10 -- # set +x 
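
Note: the event suite that starts below exercises SPDK's reactor/event framework with three small benchmarks plus a scheduler script, each traced later in the log. Run by hand from an SPDK build, with the same flags the harness uses:

  ./test/event/event_perf/event_perf -m 0xF -t 1   # four reactors (mask 0xF), 1 s of event throughput
  ./test/event/reactor/reactor -t 1                # one reactor; prints oneshot/tick poller events
  ./test/event/reactor_perf/reactor_perf -t 1      # one reactor; reports events per second
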
00:06:17.138 ************************************ 00:06:17.138 START TEST event 00:06:17.138 ************************************ 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:17.138 * Looking for test storage... 00:06:17.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:17.138 21:48:01 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.138 21:48:01 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.138 21:48:01 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.138 21:48:01 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.138 21:48:01 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.138 21:48:01 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.138 21:48:01 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.138 21:48:01 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.138 21:48:01 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.138 21:48:01 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.138 21:48:01 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.138 21:48:01 event -- scripts/common.sh@344 -- # case "$op" in 00:06:17.138 21:48:01 event -- scripts/common.sh@345 -- # : 1 00:06:17.138 21:48:01 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.138 21:48:01 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.138 21:48:01 event -- scripts/common.sh@365 -- # decimal 1 00:06:17.138 21:48:01 event -- scripts/common.sh@353 -- # local d=1 00:06:17.138 21:48:01 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.138 21:48:01 event -- scripts/common.sh@355 -- # echo 1 00:06:17.138 21:48:01 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.138 21:48:01 event -- scripts/common.sh@366 -- # decimal 2 00:06:17.138 21:48:01 event -- scripts/common.sh@353 -- # local d=2 00:06:17.138 21:48:01 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.138 21:48:01 event -- scripts/common.sh@355 -- # echo 2 00:06:17.138 21:48:01 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.138 21:48:01 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.138 21:48:01 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.138 21:48:01 event -- scripts/common.sh@368 -- # return 0 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:17.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.138 --rc genhtml_branch_coverage=1 00:06:17.138 --rc genhtml_function_coverage=1 00:06:17.138 --rc genhtml_legend=1 00:06:17.138 --rc geninfo_all_blocks=1 00:06:17.138 --rc geninfo_unexecuted_blocks=1 00:06:17.138 00:06:17.138 ' 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:17.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.138 --rc genhtml_branch_coverage=1 00:06:17.138 --rc genhtml_function_coverage=1 00:06:17.138 --rc genhtml_legend=1 00:06:17.138 --rc 
geninfo_all_blocks=1 00:06:17.138 --rc geninfo_unexecuted_blocks=1 00:06:17.138 00:06:17.138 ' 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:17.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.138 --rc genhtml_branch_coverage=1 00:06:17.138 --rc genhtml_function_coverage=1 00:06:17.138 --rc genhtml_legend=1 00:06:17.138 --rc geninfo_all_blocks=1 00:06:17.138 --rc geninfo_unexecuted_blocks=1 00:06:17.138 00:06:17.138 ' 00:06:17.138 21:48:01 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:17.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.138 --rc genhtml_branch_coverage=1 00:06:17.138 --rc genhtml_function_coverage=1 00:06:17.138 --rc genhtml_legend=1 00:06:17.138 --rc geninfo_all_blocks=1 00:06:17.138 --rc geninfo_unexecuted_blocks=1 00:06:17.138 00:06:17.138 ' 00:06:17.139 21:48:01 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:17.139 21:48:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:17.139 21:48:01 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.139 21:48:01 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:17.139 21:48:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.139 21:48:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.395 ************************************ 00:06:17.395 START TEST event_perf 00:06:17.395 ************************************ 00:06:17.395 21:48:01 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:17.395 Running I/O for 1 seconds...[2024-09-30 21:48:01.989964] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:17.395 [2024-09-30 21:48:01.990082] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71719 ] 00:06:17.395 [2024-09-30 21:48:02.119324] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:17.395 [2024-09-30 21:48:02.136695] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:17.395 [2024-09-30 21:48:02.172866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.395 [2024-09-30 21:48:02.173079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.395 [2024-09-30 21:48:02.173651] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:17.395 [2024-09-30 21:48:02.173707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.769 Running I/O for 1 seconds... 00:06:18.769 lcore 0: 185186 00:06:18.769 lcore 1: 185187 00:06:18.769 lcore 2: 185185 00:06:18.769 lcore 3: 185186 00:06:18.769 done. 
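
Note: the four lcore lines above are event_perf's per-reactor event tallies for the 1-second window (-t 1) over core mask 0xF; they sum to 740,744 events, and the near-identical per-core counts suggest the events were spread evenly across the four reactors.
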
00:06:18.769 00:06:18.769 real 0m1.275s 00:06:18.769 user 0m4.072s 00:06:18.769 sys 0m0.087s 00:06:18.769 21:48:03 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.769 ************************************ 00:06:18.769 END TEST event_perf 00:06:18.769 ************************************ 00:06:18.769 21:48:03 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.769 21:48:03 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.769 21:48:03 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:18.769 21:48:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.769 21:48:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.769 ************************************ 00:06:18.769 START TEST event_reactor 00:06:18.769 ************************************ 00:06:18.769 21:48:03 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:18.769 [2024-09-30 21:48:03.317915] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:18.769 [2024-09-30 21:48:03.318024] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71759 ] 00:06:18.769 [2024-09-30 21:48:03.444758] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:18.769 [2024-09-30 21:48:03.465394] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.769 [2024-09-30 21:48:03.495977] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.143 test_start 00:06:20.143 oneshot 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 250 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 250 00:06:20.144 tick 500 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 tick 250 00:06:20.144 tick 100 00:06:20.144 tick 100 00:06:20.144 test_end 00:06:20.144 00:06:20.144 real 0m1.267s 00:06:20.144 user 0m1.088s 00:06:20.144 sys 0m0.072s 00:06:20.144 21:48:04 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.144 ************************************ 00:06:20.144 END TEST event_reactor 00:06:20.144 ************************************ 00:06:20.144 21:48:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:20.144 21:48:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.144 21:48:04 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:20.144 21:48:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.144 21:48:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.144 ************************************ 00:06:20.144 START TEST event_reactor_perf 00:06:20.144 ************************************ 00:06:20.144 21:48:04 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:20.144 [2024-09-30 21:48:04.650446] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:06:20.144 [2024-09-30 21:48:04.650552] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71790 ] 00:06:20.144 [2024-09-30 21:48:04.777831] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:20.144 [2024-09-30 21:48:04.795988] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.144 [2024-09-30 21:48:04.829006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.078 test_start 00:06:21.078 test_end 00:06:21.078 Performance: 312591 events per second 00:06:21.336 00:06:21.336 real 0m1.267s 00:06:21.336 user 0m1.085s 00:06:21.336 sys 0m0.075s 00:06:21.337 21:48:05 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.337 ************************************ 00:06:21.337 END TEST event_reactor_perf 00:06:21.337 ************************************ 00:06:21.337 21:48:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 21:48:05 event -- event/event.sh@49 -- # uname -s 00:06:21.337 21:48:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:21.337 21:48:05 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.337 21:48:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.337 21:48:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.337 21:48:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 ************************************ 00:06:21.337 START TEST event_scheduler 00:06:21.337 ************************************ 00:06:21.337 21:48:05 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:21.337 * Looking for test storage... 
00:06:21.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.337 21:48:06 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.337 --rc genhtml_branch_coverage=1 00:06:21.337 --rc genhtml_function_coverage=1 00:06:21.337 --rc genhtml_legend=1 00:06:21.337 --rc geninfo_all_blocks=1 00:06:21.337 --rc geninfo_unexecuted_blocks=1 00:06:21.337 00:06:21.337 ' 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.337 --rc genhtml_branch_coverage=1 00:06:21.337 --rc genhtml_function_coverage=1 00:06:21.337 --rc genhtml_legend=1 00:06:21.337 --rc geninfo_all_blocks=1 00:06:21.337 --rc geninfo_unexecuted_blocks=1 00:06:21.337 00:06:21.337 ' 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.337 --rc genhtml_branch_coverage=1 00:06:21.337 --rc genhtml_function_coverage=1 00:06:21.337 --rc genhtml_legend=1 00:06:21.337 --rc geninfo_all_blocks=1 00:06:21.337 --rc geninfo_unexecuted_blocks=1 00:06:21.337 00:06:21.337 ' 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.337 --rc genhtml_branch_coverage=1 00:06:21.337 --rc genhtml_function_coverage=1 00:06:21.337 --rc genhtml_legend=1 00:06:21.337 --rc geninfo_all_blocks=1 00:06:21.337 --rc geninfo_unexecuted_blocks=1 00:06:21.337 00:06:21.337 ' 00:06:21.337 21:48:06 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:21.337 21:48:06 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71860 00:06:21.337 21:48:06 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.337 21:48:06 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71860 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 71860 ']' 00:06:21.337 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.337 21:48:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.337 21:48:06 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:21.635 [2024-09-30 21:48:06.160434] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:21.635 [2024-09-30 21:48:06.160550] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71860 ] 00:06:21.635 [2024-09-30 21:48:06.290887] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:21.635 [2024-09-30 21:48:06.310271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:21.635 [2024-09-30 21:48:06.345761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.635 [2024-09-30 21:48:06.346120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.635 [2024-09-30 21:48:06.346353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.635 [2024-09-30 21:48:06.346431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.206 21:48:07 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.206 21:48:07 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:22.206 21:48:07 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:22.206 21:48:07 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.206 21:48:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.206 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.206 POWER: Cannot set governor of lcore 0 to performance 00:06:22.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.206 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.206 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:22.206 POWER: Cannot set governor of lcore 0 to userspace 00:06:22.206 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:22.206 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:22.206 POWER: Unable to set Power Management Environment for lcore 0 00:06:22.206 [2024-09-30 21:48:07.012147] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:22.206 [2024-09-30 21:48:07.012222] dpdk_governor.c: 191:_init: 
*ERROR*: Failed to initialize on core0 00:06:22.206 [2024-09-30 21:48:07.012280] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:22.206 [2024-09-30 21:48:07.012327] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:22.206 [2024-09-30 21:48:07.012382] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:22.206 [2024-09-30 21:48:07.012448] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:22.206 21:48:07 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.206 21:48:07 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:22.206 21:48:07 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.206 21:48:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.464 [2024-09-30 21:48:07.068847] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:22.464 21:48:07 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.464 21:48:07 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:22.464 21:48:07 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.464 21:48:07 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.464 21:48:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:22.464 ************************************ 00:06:22.464 START TEST scheduler_create_thread 00:06:22.464 ************************************ 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 2 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 3 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 4 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 5 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 6 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 7 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 8 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 9 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 10 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.465 21:48:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.407 21:48:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.407 21:48:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:23.407 21:48:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.407 21:48:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.794 21:48:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.794 21:48:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:24.794 21:48:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:24.794 21:48:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.794 21:48:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.760 21:48:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.760 00:06:25.760 real 0m3.374s 00:06:25.760 user 0m0.016s 00:06:25.760 sys 0m0.006s 00:06:25.760 21:48:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.760 21:48:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.760 ************************************ 00:06:25.760 END TEST scheduler_create_thread 00:06:25.760 ************************************ 00:06:25.760 21:48:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:25.760 21:48:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71860 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 71860 ']' 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 71860 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71860 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 71860' 00:06:25.760 killing process with pid 71860 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 71860 00:06:25.760 21:48:10 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 71860 00:06:26.332 [2024-09-30 21:48:10.840650] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:26.332 00:06:26.332 real 0m5.071s 00:06:26.332 user 0m10.150s 00:06:26.332 sys 0m0.323s 00:06:26.332 21:48:11 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.332 ************************************ 00:06:26.332 END TEST event_scheduler 00:06:26.332 21:48:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:26.332 ************************************ 00:06:26.332 21:48:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:26.332 21:48:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:26.332 21:48:11 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.332 21:48:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.332 21:48:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.332 ************************************ 00:06:26.332 START TEST app_repeat 00:06:26.332 ************************************ 00:06:26.332 21:48:11 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71966 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.332 Process app_repeat pid: 71966 00:06:26.332 spdk_app_start Round 0 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71966' 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71966 /var/tmp/spdk-nbd.sock 00:06:26.332 21:48:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:26.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:26.332 21:48:11 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71966 ']' 00:06:26.332 21:48:11 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.332 21:48:11 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.332 21:48:11 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
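The app_repeat pass that follows round-trips data through a malloc bdev exposed as a kernel NBD device, entirely over the RPC socket. A condensed sketch of that flow, using the RPC methods and dd/cmp invocations as traced in the records below (the bdev_malloc_create arguments are read here as size in MiB and block size in bytes):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096            # creates "Malloc0"
    $rpc nbd_start_disk Malloc0 /dev/nbd0      # expose the bdev as /dev/nbd0
    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$randfile" bs=4096 count=256
    dd if="$randfile" of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$randfile" /dev/nbd0         # verify the data reads back intact
    $rpc nbd_stop_disk /dev/nbd0
    rm "$randfile"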
00:06:26.332 21:48:11 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.332 21:48:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:26.332 [2024-09-30 21:48:11.133026] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:26.332 [2024-09-30 21:48:11.133141] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71966 ] 00:06:26.590 [2024-09-30 21:48:11.261685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.590 [2024-09-30 21:48:11.279972] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.590 [2024-09-30 21:48:11.309866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.590 [2024-09-30 21:48:11.309938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.528 21:48:11 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.528 21:48:11 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:27.528 21:48:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.528 Malloc0 00:06:27.528 21:48:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.789 Malloc1 00:06:27.789 21:48:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.789 21:48:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.050 /dev/nbd0 00:06:28.050 21:48:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.050 21:48:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.050 21:48:12 
event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.050 1+0 records in 00:06:28.050 1+0 records out 00:06:28.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210153 s, 19.5 MB/s 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:28.050 21:48:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:28.050 21:48:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.050 21:48:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.050 21:48:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.310 /dev/nbd1 00:06:28.310 21:48:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.310 21:48:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.310 1+0 records in 00:06:28.310 1+0 records out 00:06:28.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227943 s, 18.0 MB/s 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@888 -- # 
'[' 4096 '!=' 0 ']' 00:06:28.310 21:48:12 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:28.310 21:48:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.310 21:48:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.310 21:48:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.310 21:48:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.310 21:48:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.310 21:48:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.310 { 00:06:28.310 "nbd_device": "/dev/nbd0", 00:06:28.310 "bdev_name": "Malloc0" 00:06:28.310 }, 00:06:28.310 { 00:06:28.310 "nbd_device": "/dev/nbd1", 00:06:28.310 "bdev_name": "Malloc1" 00:06:28.310 } 00:06:28.310 ]' 00:06:28.310 21:48:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.310 { 00:06:28.310 "nbd_device": "/dev/nbd0", 00:06:28.310 "bdev_name": "Malloc0" 00:06:28.310 }, 00:06:28.310 { 00:06:28.310 "nbd_device": "/dev/nbd1", 00:06:28.310 "bdev_name": "Malloc1" 00:06:28.310 } 00:06:28.310 ]' 00:06:28.310 21:48:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.570 /dev/nbd1' 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.570 /dev/nbd1' 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.570 256+0 records in 00:06:28.570 256+0 records out 00:06:28.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416389 s, 252 MB/s 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.570 256+0 records in 00:06:28.570 256+0 records out 00:06:28.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128949 s, 81.3 MB/s 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 
count=256 oflag=direct 00:06:28.570 256+0 records in 00:06:28.570 256+0 records out 00:06:28.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191044 s, 54.9 MB/s 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:28.570 21:48:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.571 21:48:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.571 21:48:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.571 21:48:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.571 21:48:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.571 21:48:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.571 21:48:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.829 
21:48:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.829 21:48:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:29.090 21:48:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:29.090 21:48:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.351 21:48:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.610 [2024-09-30 21:48:14.181950] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.610 [2024-09-30 21:48:14.208757] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.610 [2024-09-30 21:48:14.208863] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.610 [2024-09-30 21:48:14.237718] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.610 [2024-09-30 21:48:14.237770] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.946 21:48:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.946 spdk_app_start Round 1 00:06:32.946 21:48:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:32.946 21:48:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71966 /var/tmp/spdk-nbd.sock 00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71966 ']' 00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:32.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:32.946 21:48:17 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:32.946 21:48:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.946 Malloc0 00:06:32.946 21:48:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.946 Malloc1 00:06:32.946 21:48:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.946 21:48:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:33.204 /dev/nbd0 00:06:33.204 21:48:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:33.204 21:48:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.204 1+0 records in 00:06:33.204 1+0 records out 
00:06:33.204 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144261 s, 28.4 MB/s 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.204 21:48:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.204 21:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.204 21:48:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.204 21:48:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:33.463 /dev/nbd1 00:06:33.463 21:48:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.463 21:48:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.463 1+0 records in 00:06:33.463 1+0 records out 00:06:33.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387637 s, 10.6 MB/s 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.463 21:48:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.463 21:48:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.463 21:48:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.463 21:48:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.463 21:48:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.463 21:48:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.721 { 00:06:33.721 "nbd_device": "/dev/nbd0", 00:06:33.721 "bdev_name": "Malloc0" 00:06:33.721 }, 00:06:33.721 { 00:06:33.721 "nbd_device": "/dev/nbd1", 00:06:33.721 "bdev_name": "Malloc1" 00:06:33.721 } 
00:06:33.721 ]' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.721 { 00:06:33.721 "nbd_device": "/dev/nbd0", 00:06:33.721 "bdev_name": "Malloc0" 00:06:33.721 }, 00:06:33.721 { 00:06:33.721 "nbd_device": "/dev/nbd1", 00:06:33.721 "bdev_name": "Malloc1" 00:06:33.721 } 00:06:33.721 ]' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.721 /dev/nbd1' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.721 /dev/nbd1' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.721 256+0 records in 00:06:33.721 256+0 records out 00:06:33.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00672872 s, 156 MB/s 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.721 256+0 records in 00:06:33.721 256+0 records out 00:06:33.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193499 s, 54.2 MB/s 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.721 256+0 records in 00:06:33.721 256+0 records out 00:06:33.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171992 s, 61.0 MB/s 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.721 21:48:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.721 21:48:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.979 21:48:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.240 21:48:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.498 21:48:19 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.498 21:48:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.498 21:48:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.756 21:48:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.756 [2024-09-30 21:48:19.413389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.756 [2024-09-30 21:48:19.441532] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.756 [2024-09-30 21:48:19.441696] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.756 [2024-09-30 21:48:19.471968] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.756 [2024-09-30 21:48:19.472019] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:38.042 spdk_app_start Round 2 00:06:38.042 21:48:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:38.042 21:48:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:38.042 21:48:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71966 /var/tmp/spdk-nbd.sock 00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71966 ']' 00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:38.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
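Round 2 below repeats the same create/verify/kill cycle. A hedged reconstruction of the outer loop driving these rounds, pieced together from the 'for i in {0..2}', echo, spdk_kill_instance and 'sleep 3' records in this log (illustrative only, not the exact event.sh script):

    for i in 0 1 2; do
        echo "spdk_app_start Round $i"
        # ... bdev_malloc_create x2, nbd write/verify pass (as sketched earlier) ...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            spdk_kill_instance SIGTERM   # ask the app to exit cleanly
        sleep 3                          # matches the 'sleep 3' traced above
    done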
00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.042 21:48:22 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:38.042 21:48:22 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.042 Malloc0 00:06:38.042 21:48:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.301 Malloc1 00:06:38.301 21:48:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.301 21:48:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:38.561 /dev/nbd0 00:06:38.562 21:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:38.562 21:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.562 1+0 records in 00:06:38.562 1+0 records out 
00:06:38.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343562 s, 11.9 MB/s 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.562 21:48:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:38.562 21:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.562 21:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.562 21:48:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:38.562 /dev/nbd1 00:06:38.822 21:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:38.822 21:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:38.822 1+0 records in 00:06:38.822 1+0 records out 00:06:38.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224251 s, 18.3 MB/s 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:38.822 21:48:23 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:38.822 21:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:38.822 21:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.822 21:48:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.822 21:48:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.822 21:48:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.081 { 00:06:39.081 "nbd_device": "/dev/nbd0", 00:06:39.081 "bdev_name": "Malloc0" 00:06:39.081 }, 00:06:39.081 { 00:06:39.081 "nbd_device": "/dev/nbd1", 00:06:39.081 "bdev_name": "Malloc1" 00:06:39.081 } 
00:06:39.081 ]' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.081 { 00:06:39.081 "nbd_device": "/dev/nbd0", 00:06:39.081 "bdev_name": "Malloc0" 00:06:39.081 }, 00:06:39.081 { 00:06:39.081 "nbd_device": "/dev/nbd1", 00:06:39.081 "bdev_name": "Malloc1" 00:06:39.081 } 00:06:39.081 ]' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.081 /dev/nbd1' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.081 /dev/nbd1' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.081 256+0 records in 00:06:39.081 256+0 records out 00:06:39.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00796178 s, 132 MB/s 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.081 256+0 records in 00:06:39.081 256+0 records out 00:06:39.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225934 s, 46.4 MB/s 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.081 256+0 records in 00:06:39.081 256+0 records out 00:06:39.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181425 s, 57.8 MB/s 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.081 21:48:23 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.081 21:48:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.341 21:48:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.602 21:48:24 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.863 21:48:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.863 21:48:24 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.863 21:48:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:40.124 [2024-09-30 21:48:24.739008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.124 [2024-09-30 21:48:24.766068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.124 [2024-09-30 21:48:24.766074] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.124 [2024-09-30 21:48:24.796040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.124 [2024-09-30 21:48:24.796092] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:43.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.425 21:48:27 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71966 /var/tmp/spdk-nbd.sock 00:06:43.425 21:48:27 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 71966 ']' 00:06:43.425 21:48:27 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.425 21:48:27 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.425 21:48:27 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
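[annotation] The @35-@45 lines above are the waitfornbd_exit polling loop: after nbd_stop_disk, the helper re-reads /proc/partitions up to 20 times and breaks as soon as the device name has disappeared. Roughly (a sketch; any sleep between probes is an assumption — in the traced runs the first probe already misses and the loop breaks immediately):

  waitfornbd_exit() {
      local nbd_name=$1
      local i
      for ((i = 1; i <= 20; i++)); do
          # the device vanishes from the partition table once the kernel
          # has torn the NBD connection down
          if ! grep -q -w "$nbd_name" /proc/partitions; then
              break
          fi
          sleep 0.1   # assumed back-off; not visible in this trace
      done
      return 0
  }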
00:06:43.425 21:48:27 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.425 21:48:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.425 21:48:27 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:43.426 21:48:27 event.app_repeat -- event/event.sh@39 -- # killprocess 71966 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 71966 ']' 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 71966 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71966 00:06:43.426 killing process with pid 71966 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71966' 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@969 -- # kill 71966 00:06:43.426 21:48:27 event.app_repeat -- common/autotest_common.sh@974 -- # wait 71966 00:06:43.426 spdk_app_start is called in Round 0. 00:06:43.426 Shutdown signal received, stop current app iteration 00:06:43.426 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:06:43.426 spdk_app_start is called in Round 1. 00:06:43.426 Shutdown signal received, stop current app iteration 00:06:43.426 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:06:43.426 spdk_app_start is called in Round 2. 00:06:43.426 Shutdown signal received, stop current app iteration 00:06:43.426 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 reinitialization... 00:06:43.426 spdk_app_start is called in Round 3. 00:06:43.426 Shutdown signal received, stop current app iteration 00:06:43.426 ************************************ 00:06:43.426 END TEST app_repeat 00:06:43.426 ************************************ 00:06:43.426 21:48:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:43.426 21:48:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:43.426 00:06:43.426 real 0m16.927s 00:06:43.426 user 0m37.795s 00:06:43.426 sys 0m2.077s 00:06:43.426 21:48:28 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.426 21:48:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.426 21:48:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:43.426 21:48:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.426 21:48:28 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.426 21:48:28 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.426 21:48:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.426 ************************************ 00:06:43.426 START TEST cpu_locks 00:06:43.426 ************************************ 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:43.426 * Looking for test storage... 
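[annotation] killprocess, traced at @950-@974 just above, guards the kill with two checks before signalling: the target pid must still exist (kill -0), and on Linux its comm name (ps --no-headers -o comm=) must not be sudo, so the harness never signals a process it did not start. A condensed sketch of that flow, with the traced line numbers noted:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                  # @954: target must still exist
      if [ "$(uname)" = Linux ]; then             # @955
          process_name=$(ps --no-headers -o comm= "$pid")   # @956
      fi
      # @960: a sudo-owned target takes a different kill path in the real
      # helper; this sketch simply refuses it
      [ "$process_name" = sudo ] && return 1
      echo "killing process with pid $pid"        # @968
      kill "$pid"                                 # @969
      wait "$pid" || true                         # @974: reap before returning
  }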
00:06:43.426 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.426 21:48:28 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.426 --rc genhtml_branch_coverage=1 00:06:43.426 --rc genhtml_function_coverage=1 00:06:43.426 --rc genhtml_legend=1 00:06:43.426 --rc geninfo_all_blocks=1 00:06:43.426 --rc geninfo_unexecuted_blocks=1 00:06:43.426 00:06:43.426 ' 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.426 --rc genhtml_branch_coverage=1 00:06:43.426 --rc genhtml_function_coverage=1 
00:06:43.426 --rc genhtml_legend=1 00:06:43.426 --rc geninfo_all_blocks=1 00:06:43.426 --rc geninfo_unexecuted_blocks=1 00:06:43.426 00:06:43.426 ' 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.426 --rc genhtml_branch_coverage=1 00:06:43.426 --rc genhtml_function_coverage=1 00:06:43.426 --rc genhtml_legend=1 00:06:43.426 --rc geninfo_all_blocks=1 00:06:43.426 --rc geninfo_unexecuted_blocks=1 00:06:43.426 00:06:43.426 ' 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:43.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.426 --rc genhtml_branch_coverage=1 00:06:43.426 --rc genhtml_function_coverage=1 00:06:43.426 --rc genhtml_legend=1 00:06:43.426 --rc geninfo_all_blocks=1 00:06:43.426 --rc geninfo_unexecuted_blocks=1 00:06:43.426 00:06:43.426 ' 00:06:43.426 21:48:28 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:43.426 21:48:28 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:43.426 21:48:28 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:43.426 21:48:28 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.426 21:48:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.688 ************************************ 00:06:43.688 START TEST default_locks 00:06:43.688 ************************************ 00:06:43.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72386 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72386 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72386 ']' 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.688 21:48:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.688 [2024-09-30 21:48:28.314918] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:06:43.688 [2024-09-30 21:48:28.315050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72386 ] 00:06:43.688 [2024-09-30 21:48:28.444157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:43.688 [2024-09-30 21:48:28.463316] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.688 [2024-09-30 21:48:28.494424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72386 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72386 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72386 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 72386 ']' 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 72386 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72386 00:06:44.657 killing process with pid 72386 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72386' 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 72386 00:06:44.657 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 72386 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72386 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72386 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:44.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72386 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 72386 ']' 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.919 ERROR: process (pid: 72386) is no longer running 00:06:44.919 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72386) - No such process 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:44.919 00:06:44.919 real 0m1.413s 00:06:44.919 user 0m1.460s 00:06:44.919 sys 0m0.418s 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.919 21:48:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.919 ************************************ 00:06:44.919 END TEST default_locks 00:06:44.919 ************************************ 00:06:44.919 21:48:29 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:44.919 21:48:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.919 21:48:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.919 21:48:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.919 ************************************ 00:06:44.919 START TEST default_locks_via_rpc 00:06:44.919 ************************************ 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72433 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72433 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72433 ']' 
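[annotation] The ERROR/'No such process' pair above is expected output: default_locks wraps the second waitforlisten in NOT, which inverts the exit status — the helper records es=1 when the wrapped command fails and the test then asserts (( !es == 0 )). A sketch of that inversion, condensed from the @650-@677 trace (the signal-exit and allow-listed-error branches are elided):

  NOT() {
      local es=0
      # @652: valid_exec_arg "$@" (arg must be a function or executable), elided
      "$@" || es=$?      # @653: run the wrapped command, record its status
      # @661/@672: special-casing of signal exits (es > 128) and
      # allow-listed error patterns is omitted from this sketch
      (( !es == 0 ))     # @677: invert - NOT succeeds only if the command failed
  }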
00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.919 21:48:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.178 [2024-09-30 21:48:29.762671] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:45.178 [2024-09-30 21:48:29.762780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72433 ] 00:06:45.178 [2024-09-30 21:48:29.885869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:45.178 [2024-09-30 21:48:29.904226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.178 [2024-09-30 21:48:29.936982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72433 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72433 00:06:46.114 21:48:30 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72433 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 72433 ']' 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 72433 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72433 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:46.114 killing process with pid 72433 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72433' 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 72433 00:06:46.114 21:48:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 72433 00:06:46.376 00:06:46.376 real 0m1.403s 00:06:46.376 user 0m1.457s 00:06:46.376 sys 0m0.405s 00:06:46.376 21:48:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.376 ************************************ 00:06:46.376 END TEST default_locks_via_rpc 00:06:46.376 ************************************ 00:06:46.376 21:48:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:46.376 21:48:31 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:46.376 21:48:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.376 21:48:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.376 21:48:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.376 ************************************ 00:06:46.376 START TEST non_locking_app_on_locked_coremask 00:06:46.376 ************************************ 00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72480 00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72480 /var/tmp/spdk.sock 00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72480 ']' 00:06:46.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
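[annotation] Both lock tests above decide pass/fail with the same two-line probe, locks_exist (event/cpu_locks.sh@22): lslocks -p <pid> lists every lock the target process holds, one per line, and grep -q spdk_cpu_lock checks that at least one of them is the core-lock file. As a standalone sketch (the wrapper form is illustrative; the pid is one from the trace):

  locks_exist() {
      local pid=$1
      # spdk_tgt takes a file lock per claimed core; lslocks prints the
      # holder's lock table, so a quiet grep is the whole assertion
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }

  locks_exist 72480 && echo "core lock held"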
00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.376 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.377 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.377 21:48:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.636 [2024-09-30 21:48:31.224837] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:46.636 [2024-09-30 21:48:31.224982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72480 ] 00:06:46.636 [2024-09-30 21:48:31.354569] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.636 [2024-09-30 21:48:31.374984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.636 [2024-09-30 21:48:31.406583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72496 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72496 /var/tmp/spdk2.sock 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72496 ']' 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.265 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.527 [2024-09-30 21:48:32.124184] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:06:47.527 [2024-09-30 21:48:32.124318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72496 ] 00:06:47.527 [2024-09-30 21:48:32.254706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:47.527 [2024-09-30 21:48:32.272485] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:47.527 [2024-09-30 21:48:32.272544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.527 [2024-09-30 21:48:32.336219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.471 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.471 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:48.471 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72480 00:06:48.471 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72480 00:06:48.471 21:48:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.471 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72480 00:06:48.471 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72480 ']' 00:06:48.471 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72480 00:06:48.471 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.471 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.471 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72480 00:06:48.729 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.729 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.729 killing process with pid 72480 00:06:48.729 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72480' 00:06:48.729 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72480 00:06:48.729 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72480 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72496 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72496 ']' 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72496 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.990 21:48:33 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72496 00:06:48.990 killing process with pid 72496 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72496' 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72496 00:06:48.990 21:48:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72496 00:06:49.252 ************************************ 00:06:49.252 END TEST non_locking_app_on_locked_coremask 00:06:49.252 ************************************ 00:06:49.252 00:06:49.252 real 0m2.890s 00:06:49.252 user 0m3.145s 00:06:49.252 sys 0m0.799s 00:06:49.252 21:48:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.252 21:48:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.514 21:48:34 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:49.514 21:48:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.514 21:48:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.514 21:48:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.514 ************************************ 00:06:49.514 START TEST locking_app_on_unlocked_coremask 00:06:49.514 ************************************ 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72554 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72554 /var/tmp/spdk.sock 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72554 ']' 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.514 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.514 [2024-09-30 21:48:34.154228] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:06:49.514 [2024-09-30 21:48:34.154337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72554 ] 00:06:49.514 [2024-09-30 21:48:34.277424] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:49.514 [2024-09-30 21:48:34.295881] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:49.514 [2024-09-30 21:48:34.296098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.514 [2024-09-30 21:48:34.326078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.456 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.456 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.456 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72570 00:06:50.456 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72570 /var/tmp/spdk2.sock 00:06:50.456 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72570 ']' 00:06:50.456 21:48:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:50.456 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.456 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.456 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.456 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.456 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 [2024-09-30 21:48:35.066362] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:50.456 [2024-09-30 21:48:35.066662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72570 ] 00:06:50.456 [2024-09-30 21:48:35.195654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
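[annotation] The launch pair traced here is the core of the unlocked-coremask test: the first spdk_tgt starts with --disable-cpumask-locks, so the 'CPU core locks deactivated' notice appears and core 0 is never claimed; the second instance then binds the same mask 0x1 on its own RPC socket via -r /var/tmp/spdk2.sock and starts cleanly. Schematically (flags, pids, and socket paths per the trace; the backgrounding and pid bookkeeping are simplified):

  # first instance: mask 0x1, but no core lock taken
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &
  waitforlisten $! /var/tmp/spdk.sock

  # second instance: default locking, same mask, separate RPC socket
  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &
  waitforlisten $! /var/tmp/spdk2.sock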
00:06:50.456 [2024-09-30 21:48:35.213442] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.717 [2024-09-30 21:48:35.277008] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.290 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.290 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.290 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72570 00:06:51.290 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72570 00:06:51.290 21:48:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72554 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72554 ']' 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72554 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72554 00:06:51.551 killing process with pid 72554 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72554' 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 72554 00:06:51.551 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72554 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72570 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72570 ']' 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 72570 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72570 00:06:52.125 killing process with pid 72570 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72570' 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@969 -- # kill 72570 00:06:52.125 21:48:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 72570 00:06:52.383 ************************************ 00:06:52.383 END TEST locking_app_on_unlocked_coremask 00:06:52.383 ************************************ 00:06:52.383 00:06:52.383 real 0m2.942s 00:06:52.383 user 0m3.264s 00:06:52.383 sys 0m0.779s 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.383 21:48:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:52.383 21:48:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:52.383 21:48:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.383 21:48:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.383 ************************************ 00:06:52.383 START TEST locking_app_on_locked_coremask 00:06:52.383 ************************************ 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72628 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72628 /var/tmp/spdk.sock 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72628 ']' 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.383 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.383 [2024-09-30 21:48:37.152282] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:52.383 [2024-09-30 21:48:37.152394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72628 ] 00:06:52.641 [2024-09-30 21:48:37.281992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
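locking_app_on_locked_coremask inverts the setup: the first spdk_tgt (pid 72628) starts with core locking enabled, so it claims core 0 at startup, and the second instance is launched under the NOT wrapper (cpu_locks.sh@120), which passes only if the wrapped command fails. The per-core lock is an exclusive lock on /var/tmp/spdk_cpu_lock_NNN; as a rough analogy using flock(1) rather than SPDK itself (SPDK's actual locking primitive may differ in detail):

    flock -n /var/tmp/spdk_cpu_lock_000 sleep 60 &         # first holder, like spdk_tgt on core 0
    flock -n /var/tmp/spdk_cpu_lock_000 true \
      || echo 'core 0 already claimed'                     # a second claimant fails immediately

The claim error and forced exit expected from the second target appear further down.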
00:06:52.641 [2024-09-30 21:48:37.300562] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.641 [2024-09-30 21:48:37.333836] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72644 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72644 /var/tmp/spdk2.sock 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72644 /var/tmp/spdk2.sock 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72644 /var/tmp/spdk2.sock 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 72644 ']' 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.211 21:48:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.472 [2024-09-30 21:48:38.054560] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:53.472 [2024-09-30 21:48:38.054675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72644 ] 00:06:53.472 [2024-09-30 21:48:38.186718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:53.472 [2024-09-30 21:48:38.211308] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72628 has claimed it. 
00:06:53.472 [2024-09-30 21:48:38.211358] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:54.043 ERROR: process (pid: 72644) is no longer running 00:06:54.043 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72644) - No such process 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72628 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72628 00:06:54.043 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72628 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 72628 ']' 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 72628 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72628 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.305 killing process with pid 72628 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72628' 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 72628 00:06:54.305 21:48:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 72628 00:06:54.566 00:06:54.566 real 0m2.133s 00:06:54.566 user 0m2.378s 00:06:54.566 sys 0m0.517s 00:06:54.566 21:48:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.566 ************************************ 00:06:54.566 END TEST locking_app_on_locked_coremask 00:06:54.566 ************************************ 00:06:54.566 21:48:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.566 21:48:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:54.566 21:48:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.566 21:48:39 event.cpu_locks 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.566 21:48:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.566 ************************************ 00:06:54.566 START TEST locking_overlapped_coremask 00:06:54.566 ************************************ 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72686 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72686 /var/tmp/spdk.sock 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72686 ']' 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.566 21:48:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.566 [2024-09-30 21:48:39.353402] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:54.566 [2024-09-30 21:48:39.353529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72686 ] 00:06:54.826 [2024-09-30 21:48:39.484730] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
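The overlapped variant uses different but intersecting masks: the first target takes -m 0x7 (cores 0-2, as its three reactor lines just below confirm) and the second is launched with -m 0x1c (cores 2-4). The intersection is a single core, which plain shell arithmetic can confirm; this check is illustrative only, not part of the test:

    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))        # -> 0x4, i.e. core 2

So the second instance is expected to fail claiming core 2, which is exactly the error reported below.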
00:06:54.826 [2024-09-30 21:48:39.506039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.826 [2024-09-30 21:48:39.556597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.826 [2024-09-30 21:48:39.556937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.826 [2024-09-30 21:48:39.557012] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72704 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72704 /var/tmp/spdk2.sock 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72704 /var/tmp/spdk2.sock 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72704 /var/tmp/spdk2.sock 00:06:55.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 72704 ']' 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.419 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.681 [2024-09-30 21:48:40.297371] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:55.681 [2024-09-30 21:48:40.297925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72704 ] 00:06:55.681 [2024-09-30 21:48:40.433875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:06:55.681 [2024-09-30 21:48:40.457807] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72686 has claimed it. 00:06:55.681 [2024-09-30 21:48:40.457875] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:56.255 ERROR: process (pid: 72704) is no longer running 00:06:56.255 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (72704) - No such process 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72686 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 72686 ']' 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 72686 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72686 00:06:56.255 killing process with pid 72686 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72686' 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 72686 00:06:56.255 21:48:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 72686 00:06:56.517 00:06:56.517 real 0m1.968s 00:06:56.517 user 0m5.297s 00:06:56.517 sys 0m0.502s 00:06:56.517 ************************************ 00:06:56.517 END TEST locking_overlapped_coremask 00:06:56.517 
************************************ 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.517 21:48:41 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:56.517 21:48:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.517 21:48:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.517 21:48:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.517 ************************************ 00:06:56.517 START TEST locking_overlapped_coremask_via_rpc 00:06:56.517 ************************************ 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72746 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72746 /var/tmp/spdk.sock 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72746 ']' 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:56.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.517 21:48:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.777 [2024-09-30 21:48:41.415039] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:56.777 [2024-09-30 21:48:41.415247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72746 ] 00:06:56.777 [2024-09-30 21:48:41.561725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:56.777 [2024-09-30 21:48:41.574460] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
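locking_overlapped_coremask_via_rpc replays the same 0x7 / 0x1c overlap, but both targets now start with --disable-cpumask-locks, so neither claims anything at boot (hence the "CPU core locks deactivated" notices) and both come up despite the overlap. Locks are only taken later, on demand, via the framework_enable_cpumask_locks RPC. Right after boot there should be nothing to see, e.g. (a sketch, assuming no other SPDK instance is running on the box):

    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo 'no core locks held yet'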
00:06:56.777 [2024-09-30 21:48:41.574497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.039 [2024-09-30 21:48:41.614184] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.039 [2024-09-30 21:48:41.614491] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.039 [2024-09-30 21:48:41.614581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72764 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72764 /var/tmp/spdk2.sock 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72764 ']' 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.612 21:48:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.612 [2024-09-30 21:48:42.414597] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:06:57.612 [2024-09-30 21:48:42.414757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72764 ] 00:06:57.874 [2024-09-30 21:48:42.551848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.874 [2024-09-30 21:48:42.575773] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
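Mask 0x1c decodes to cores 2, 3 and 4, which is what the second target's three reactor lines below report (reactor startup order does not necessarily follow core numbering). A one-liner to decode such a mask, for reference:

    for i in {0..31}; do (( (0x1c >> i) & 1 )) && echo "core $i"; done   # -> core 2, core 3, core 4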
00:06:57.874 [2024-09-30 21:48:42.575834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:58.135 [2024-09-30 21:48:42.693833] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.135 [2024-09-30 21:48:42.693978] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.135 [2024-09-30 21:48:42.694036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.710 [2024-09-30 21:48:43.290400] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72746 has claimed it. 
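By this point the first target has already run framework_enable_cpumask_locks (cpu_locks.sh@155) and claimed cores 0-2, so the second target's attempt collides on core 2. Unlike the startup-time variant, the failure comes back as a JSON-RPC error (-32603, "Failed to claim CPU core: 2", shown in the exchange just below) and the target keeps running. Roughly equivalent direct invocations, assuming rpc_cmd is the usual thin wrapper over scripts/rpc.py and that rpc.py exits nonzero on a JSON-RPC error:

    scripts/rpc.py framework_enable_cpumask_locks                          # first target: claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo 'claim rejected, target still running'                       # second target: collides on core 2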
00:06:58.710 request: 00:06:58.710 { 00:06:58.710 "method": "framework_enable_cpumask_locks", 00:06:58.710 "req_id": 1 00:06:58.710 } 00:06:58.710 Got JSON-RPC error response 00:06:58.710 response: 00:06:58.710 { 00:06:58.710 "code": -32603, 00:06:58.710 "message": "Failed to claim CPU core: 2" 00:06:58.710 } 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72746 /var/tmp/spdk.sock 00:06:58.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72746 ']' 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.710 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72764 /var/tmp/spdk2.sock 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 72764 ']' 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
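After the failed claim, the test re-attaches to both targets and runs check_remaining_locks (visible below at cpu_locks.sh@36-38): it globs the actual lock files into an array, brace-expands the expected names, and string-compares the two lists. The long backslash-escaped run in the [[ ... == ... ]] trace line is just that expected string with every character quoted by xtrace. The same check by hand, as a sketch:

    locks=(/var/tmp/spdk_cpu_lock_*)
    expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${expected[*]}" ]] && echo 'exactly cores 0-2 are locked'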
00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.971 00:06:58.971 real 0m2.445s 00:06:58.971 user 0m1.223s 00:06:58.971 sys 0m0.154s 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.971 21:48:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.971 ************************************ 00:06:58.971 END TEST locking_overlapped_coremask_via_rpc 00:06:58.971 ************************************ 00:06:59.233 21:48:43 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:59.233 21:48:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72746 ]] 00:06:59.233 21:48:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72746 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72746 ']' 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72746 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72746 00:06:59.233 killing process with pid 72746 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72746' 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72746 00:06:59.233 21:48:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72746 00:06:59.494 21:48:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72764 ]] 00:06:59.494 21:48:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72764 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72764 ']' 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72764 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.494 
21:48:44 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72764 00:06:59.494 killing process with pid 72764 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72764' 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 72764 00:06:59.494 21:48:44 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 72764 00:07:00.065 21:48:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.065 Process with pid 72746 is not found 00:07:00.065 Process with pid 72764 is not found 00:07:00.066 21:48:44 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:00.066 21:48:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72746 ]] 00:07:00.066 21:48:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72746 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72746 ']' 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72746 00:07:00.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72746) - No such process 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72746 is not found' 00:07:00.066 21:48:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72764 ]] 00:07:00.066 21:48:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72764 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 72764 ']' 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 72764 00:07:00.066 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72764) - No such process 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 72764 is not found' 00:07:00.066 21:48:44 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:00.066 ************************************ 00:07:00.066 END TEST cpu_locks 00:07:00.066 ************************************ 00:07:00.066 00:07:00.066 real 0m16.517s 00:07:00.066 user 0m29.681s 00:07:00.066 sys 0m4.523s 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.066 21:48:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.066 ************************************ 00:07:00.066 END TEST event 00:07:00.066 ************************************ 00:07:00.066 00:07:00.066 real 0m42.839s 00:07:00.066 user 1m24.033s 00:07:00.066 sys 0m7.403s 00:07:00.066 21:48:44 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.066 21:48:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.066 21:48:44 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:00.066 21:48:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.066 21:48:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.066 21:48:44 -- common/autotest_common.sh@10 -- # set +x 00:07:00.066 ************************************ 00:07:00.066 START TEST thread 00:07:00.066 ************************************ 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:00.066 * Looking for test storage... 
00:07:00.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:00.066 21:48:44 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.066 21:48:44 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.066 21:48:44 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.066 21:48:44 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.066 21:48:44 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.066 21:48:44 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.066 21:48:44 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.066 21:48:44 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.066 21:48:44 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.066 21:48:44 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.066 21:48:44 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.066 21:48:44 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:00.066 21:48:44 thread -- scripts/common.sh@345 -- # : 1 00:07:00.066 21:48:44 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.066 21:48:44 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.066 21:48:44 thread -- scripts/common.sh@365 -- # decimal 1 00:07:00.066 21:48:44 thread -- scripts/common.sh@353 -- # local d=1 00:07:00.066 21:48:44 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.066 21:48:44 thread -- scripts/common.sh@355 -- # echo 1 00:07:00.066 21:48:44 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.066 21:48:44 thread -- scripts/common.sh@366 -- # decimal 2 00:07:00.066 21:48:44 thread -- scripts/common.sh@353 -- # local d=2 00:07:00.066 21:48:44 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.066 21:48:44 thread -- scripts/common.sh@355 -- # echo 2 00:07:00.066 21:48:44 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.066 21:48:44 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.066 21:48:44 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.066 21:48:44 thread -- scripts/common.sh@368 -- # return 0 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:00.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.066 --rc genhtml_branch_coverage=1 00:07:00.066 --rc genhtml_function_coverage=1 00:07:00.066 --rc genhtml_legend=1 00:07:00.066 --rc geninfo_all_blocks=1 00:07:00.066 --rc geninfo_unexecuted_blocks=1 00:07:00.066 00:07:00.066 ' 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:00.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.066 --rc genhtml_branch_coverage=1 00:07:00.066 --rc genhtml_function_coverage=1 00:07:00.066 --rc genhtml_legend=1 00:07:00.066 --rc geninfo_all_blocks=1 00:07:00.066 --rc geninfo_unexecuted_blocks=1 00:07:00.066 00:07:00.066 ' 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:00.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:00.066 --rc genhtml_branch_coverage=1 00:07:00.066 --rc genhtml_function_coverage=1 00:07:00.066 --rc genhtml_legend=1 00:07:00.066 --rc geninfo_all_blocks=1 00:07:00.066 --rc geninfo_unexecuted_blocks=1 00:07:00.066 00:07:00.066 ' 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:00.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.066 --rc genhtml_branch_coverage=1 00:07:00.066 --rc genhtml_function_coverage=1 00:07:00.066 --rc genhtml_legend=1 00:07:00.066 --rc geninfo_all_blocks=1 00:07:00.066 --rc geninfo_unexecuted_blocks=1 00:07:00.066 00:07:00.066 ' 00:07:00.066 21:48:44 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.066 21:48:44 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.066 ************************************ 00:07:00.066 START TEST thread_poller_perf 00:07:00.066 ************************************ 00:07:00.066 21:48:44 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:00.327 [2024-09-30 21:48:44.897754] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:00.327 [2024-09-30 21:48:44.898502] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72891 ] 00:07:00.327 [2024-09-30 21:48:45.030754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.327 [2024-09-30 21:48:45.049273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.327 Running 1000 pollers for 1 seconds with 1 microseconds period. 
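poller_perf is invoked here as poller_perf -b 1000 -l 1 -t 1. Matching the flags against the banner, -b appears to be the number of pollers registered, -l the poller period in microseconds and -t the run time in seconds; that mapping is inferred from the output, not from documented flag semantics. The two runs in this log differ only in the period:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 us period (this run)
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # busy pollers, fire every reactor iteration (next run)

The busy-poller run's much larger total_run_count below follows directly from the 0 us period.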
00:07:00.327 [2024-09-30 21:48:45.107534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.715 ====================================== 00:07:01.715 busy:2612350300 (cyc) 00:07:01.715 total_run_count: 305000 00:07:01.715 tsc_hz: 2600000000 (cyc) 00:07:01.715 ====================================== 00:07:01.715 poller_cost: 8565 (cyc), 3294 (nsec) 00:07:01.715 00:07:01.715 real 0m1.340s 00:07:01.715 user 0m1.146s 00:07:01.715 sys 0m0.084s 00:07:01.715 21:48:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.715 21:48:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 ************************************ 00:07:01.715 END TEST thread_poller_perf 00:07:01.715 ************************************ 00:07:01.715 21:48:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.715 21:48:46 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:01.715 21:48:46 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.715 21:48:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.715 ************************************ 00:07:01.715 START TEST thread_poller_perf 00:07:01.715 ************************************ 00:07:01.715 21:48:46 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:01.715 [2024-09-30 21:48:46.304926] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:01.715 [2024-09-30 21:48:46.305450] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72933 ] 00:07:01.715 [2024-09-30 21:48:46.437380] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:01.715 [2024-09-30 21:48:46.458273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.715 Running 1000 pollers for 1 seconds with 0 microseconds period. 
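The poller_cost line in the result block above is plain arithmetic over the two numbers printed before it: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. Reproducing the first run's figures in shell (nothing SPDK-specific, just integer division):

    echo $(( 2612350300 / 305000 ))                 # 8565 cyc per poller invocation
    echo $(( 8565 * 1000000000 / 2600000000 ))      # 3294 nsec at the reported 2.6 GHz TSC

The busy-poller run below works out the same way: 2603678510 / 3949000 gives 659 cyc, or 253 nsec.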
00:07:01.715 [2024-09-30 21:48:46.511227] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.107 ====================================== 00:07:03.107 busy:2603678510 (cyc) 00:07:03.107 total_run_count: 3949000 00:07:03.107 tsc_hz: 2600000000 (cyc) 00:07:03.107 ====================================== 00:07:03.107 poller_cost: 659 (cyc), 253 (nsec) 00:07:03.107 00:07:03.107 real 0m1.329s 00:07:03.107 user 0m1.122s 00:07:03.107 sys 0m0.097s 00:07:03.107 21:48:47 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.107 ************************************ 00:07:03.107 END TEST thread_poller_perf 00:07:03.107 ************************************ 00:07:03.107 21:48:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.107 21:48:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:03.107 ************************************ 00:07:03.107 END TEST thread 00:07:03.107 ************************************ 00:07:03.107 00:07:03.107 real 0m2.953s 00:07:03.107 user 0m2.383s 00:07:03.107 sys 0m0.303s 00:07:03.107 21:48:47 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.107 21:48:47 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.107 21:48:47 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:03.107 21:48:47 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:03.107 21:48:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.107 21:48:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.107 21:48:47 -- common/autotest_common.sh@10 -- # set +x 00:07:03.107 ************************************ 00:07:03.107 START TEST app_cmdline 00:07:03.107 ************************************ 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:03.107 * Looking for test storage... 00:07:03.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:03.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.107 21:48:47 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.107 --rc genhtml_branch_coverage=1 00:07:03.107 --rc genhtml_function_coverage=1 00:07:03.107 --rc genhtml_legend=1 00:07:03.107 --rc geninfo_all_blocks=1 00:07:03.107 --rc geninfo_unexecuted_blocks=1 00:07:03.107 00:07:03.107 ' 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.107 --rc genhtml_branch_coverage=1 00:07:03.107 --rc genhtml_function_coverage=1 00:07:03.107 --rc genhtml_legend=1 00:07:03.107 --rc geninfo_all_blocks=1 00:07:03.107 --rc geninfo_unexecuted_blocks=1 00:07:03.107 00:07:03.107 ' 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.107 --rc genhtml_branch_coverage=1 00:07:03.107 --rc genhtml_function_coverage=1 00:07:03.107 --rc genhtml_legend=1 00:07:03.107 --rc geninfo_all_blocks=1 00:07:03.107 --rc geninfo_unexecuted_blocks=1 00:07:03.107 00:07:03.107 ' 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.107 --rc genhtml_branch_coverage=1 00:07:03.107 --rc genhtml_function_coverage=1 00:07:03.107 --rc genhtml_legend=1 00:07:03.107 --rc geninfo_all_blocks=1 00:07:03.107 --rc geninfo_unexecuted_blocks=1 00:07:03.107 00:07:03.107 ' 00:07:03.107 21:48:47 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:03.107 21:48:47 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=73011 00:07:03.107 21:48:47 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 73011 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 73011 ']' 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.107 21:48:47 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:03.107 21:48:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.368 [2024-09-30 21:48:47.950225] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:03.368 [2024-09-30 21:48:47.950629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73011 ] 00:07:03.368 [2024-09-30 21:48:48.084314] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:03.368 [2024-09-30 21:48:48.100111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.368 [2024-09-30 21:48:48.162834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.309 21:48:48 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.309 21:48:48 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:04.309 { 00:07:04.309 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:07:04.309 "fields": { 00:07:04.309 "major": 25, 00:07:04.309 "minor": 1, 00:07:04.309 "patch": 0, 00:07:04.309 "suffix": "-pre", 00:07:04.309 "commit": "09cc66129" 00:07:04.309 } 00:07:04.309 } 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:04.309 21:48:48 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.309 21:48:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.309 21:48:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:04.309 21:48:48 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.309 21:48:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:04.309 21:48:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:04.309 21:48:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:04.309 21:48:49 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:04.569 request: 00:07:04.569 { 00:07:04.569 "method": "env_dpdk_get_mem_stats", 00:07:04.569 "req_id": 1 00:07:04.569 } 00:07:04.569 Got JSON-RPC error response 00:07:04.569 response: 00:07:04.569 { 00:07:04.569 "code": -32601, 00:07:04.569 "message": "Method not found" 00:07:04.569 } 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.569 21:48:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 73011 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 73011 ']' 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 73011 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73011 00:07:04.569 killing process with pid 73011 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73011' 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@969 -- # kill 73011 00:07:04.569 21:48:49 app_cmdline -- common/autotest_common.sh@974 -- # wait 73011 00:07:04.829 ************************************ 00:07:04.829 END TEST app_cmdline 00:07:04.829 ************************************ 00:07:04.829 00:07:04.829 real 0m1.784s 00:07:04.829 user 0m2.034s 00:07:04.829 sys 0m0.484s 00:07:04.829 21:48:49 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:04.829 21:48:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:04.829 21:48:49 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:04.829 21:48:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:04.829 21:48:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:04.829 21:48:49 -- common/autotest_common.sh@10 -- # set +x 00:07:04.829 ************************************ 00:07:04.829 START TEST version 00:07:04.829 ************************************ 00:07:04.829 21:48:49 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:04.829 * Looking for test storage... 
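The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so rpc_get_methods reports exactly those two methods and any other call (here env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601, "Method not found". A minimal sketch of the same check done by hand, reusing the binaries and the default /var/tmp/spdk.sock socket from this run:

    # Start the target with only two RPCs allowed (paths as used in this run)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &

    # Allowed method: prints the version object shown above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

    # Not on the allowlist: fails with {"code": -32601, "message": "Method not found"}
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats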
00:07:04.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:04.829 21:48:49 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:04.829 21:48:49 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:04.829 21:48:49 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.089 21:48:49 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.089 21:48:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.089 21:48:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.089 21:48:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.089 21:48:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.089 21:48:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.089 21:48:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.089 21:48:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.089 21:48:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.089 21:48:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.089 21:48:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.089 21:48:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.089 21:48:49 version -- scripts/common.sh@344 -- # case "$op" in 00:07:05.089 21:48:49 version -- scripts/common.sh@345 -- # : 1 00:07:05.089 21:48:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.089 21:48:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.089 21:48:49 version -- scripts/common.sh@365 -- # decimal 1 00:07:05.089 21:48:49 version -- scripts/common.sh@353 -- # local d=1 00:07:05.089 21:48:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.089 21:48:49 version -- scripts/common.sh@355 -- # echo 1 00:07:05.089 21:48:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.089 21:48:49 version -- scripts/common.sh@366 -- # decimal 2 00:07:05.089 21:48:49 version -- scripts/common.sh@353 -- # local d=2 00:07:05.089 21:48:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.089 21:48:49 version -- scripts/common.sh@355 -- # echo 2 00:07:05.089 21:48:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.089 21:48:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.089 21:48:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.089 21:48:49 version -- scripts/common.sh@368 -- # return 0 00:07:05.089 21:48:49 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.089 21:48:49 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.089 --rc genhtml_branch_coverage=1 00:07:05.089 --rc genhtml_function_coverage=1 00:07:05.089 --rc genhtml_legend=1 00:07:05.089 --rc geninfo_all_blocks=1 00:07:05.089 --rc geninfo_unexecuted_blocks=1 00:07:05.089 00:07:05.089 ' 00:07:05.089 21:48:49 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.089 --rc genhtml_branch_coverage=1 00:07:05.089 --rc genhtml_function_coverage=1 00:07:05.089 --rc genhtml_legend=1 00:07:05.089 --rc geninfo_all_blocks=1 00:07:05.089 --rc geninfo_unexecuted_blocks=1 00:07:05.089 00:07:05.089 ' 00:07:05.089 21:48:49 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.089 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:05.089 --rc genhtml_branch_coverage=1 00:07:05.089 --rc genhtml_function_coverage=1 00:07:05.089 --rc genhtml_legend=1 00:07:05.089 --rc geninfo_all_blocks=1 00:07:05.089 --rc geninfo_unexecuted_blocks=1 00:07:05.089 00:07:05.089 ' 00:07:05.089 21:48:49 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.089 --rc genhtml_branch_coverage=1 00:07:05.089 --rc genhtml_function_coverage=1 00:07:05.089 --rc genhtml_legend=1 00:07:05.089 --rc geninfo_all_blocks=1 00:07:05.089 --rc geninfo_unexecuted_blocks=1 00:07:05.089 00:07:05.089 ' 00:07:05.089 21:48:49 version -- app/version.sh@17 -- # get_header_version major 00:07:05.089 21:48:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # cut -f2 00:07:05.089 21:48:49 version -- app/version.sh@17 -- # major=25 00:07:05.089 21:48:49 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.089 21:48:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # cut -f2 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.089 21:48:49 version -- app/version.sh@18 -- # minor=1 00:07:05.089 21:48:49 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.089 21:48:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # cut -f2 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.089 21:48:49 version -- app/version.sh@19 -- # patch=0 00:07:05.089 21:48:49 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.089 21:48:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # cut -f2 00:07:05.089 21:48:49 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.089 21:48:49 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.089 21:48:49 version -- app/version.sh@22 -- # version=25.1 00:07:05.089 21:48:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.089 21:48:49 version -- app/version.sh@28 -- # version=25.1rc0 00:07:05.089 21:48:49 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:05.089 21:48:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.089 21:48:49 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:05.089 21:48:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:05.089 ************************************ 00:07:05.089 END TEST version 00:07:05.089 ************************************ 00:07:05.089 00:07:05.089 real 0m0.191s 00:07:05.089 user 0m0.118s 00:07:05.089 sys 0m0.100s 00:07:05.089 21:48:49 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.089 21:48:49 version -- common/autotest_common.sh@10 -- # set +x 00:07:05.089 21:48:49 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:05.089 21:48:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:05.089 21:48:49 -- spdk/autotest.sh@194 -- # uname -s 00:07:05.089 21:48:49 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:05.089 21:48:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.089 21:48:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.089 21:48:49 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:05.089 21:48:49 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:05.089 21:48:49 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:05.089 21:48:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:05.089 21:48:49 -- common/autotest_common.sh@10 -- # set +x 00:07:05.089 ************************************ 00:07:05.089 START TEST blockdev_nvme 00:07:05.089 ************************************ 00:07:05.089 21:48:49 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:05.089 * Looking for test storage... 00:07:05.089 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:05.089 21:48:49 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:05.089 21:48:49 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:07:05.089 21:48:49 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:05.089 21:48:49 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.089 21:48:49 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.090 21:48:49 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.090 21:48:49 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.090 21:48:49 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:05.090 21:48:49 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:05.090 21:48:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.090 21:48:49 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.090 21:48:49 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.349 21:48:49 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:05.349 21:48:49 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.349 21:48:49 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.349 --rc genhtml_branch_coverage=1 00:07:05.349 --rc genhtml_function_coverage=1 00:07:05.349 --rc genhtml_legend=1 00:07:05.349 --rc geninfo_all_blocks=1 00:07:05.349 --rc geninfo_unexecuted_blocks=1 00:07:05.349 00:07:05.349 ' 00:07:05.349 21:48:49 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.349 --rc genhtml_branch_coverage=1 00:07:05.349 --rc genhtml_function_coverage=1 00:07:05.349 --rc genhtml_legend=1 00:07:05.349 --rc geninfo_all_blocks=1 00:07:05.349 --rc geninfo_unexecuted_blocks=1 00:07:05.349 00:07:05.349 ' 00:07:05.349 21:48:49 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.349 --rc genhtml_branch_coverage=1 00:07:05.349 --rc genhtml_function_coverage=1 00:07:05.349 --rc genhtml_legend=1 00:07:05.349 --rc geninfo_all_blocks=1 00:07:05.349 --rc geninfo_unexecuted_blocks=1 00:07:05.349 00:07:05.349 ' 00:07:05.349 21:48:49 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:05.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.349 --rc genhtml_branch_coverage=1 00:07:05.349 --rc genhtml_function_coverage=1 00:07:05.349 --rc genhtml_legend=1 00:07:05.349 --rc geninfo_all_blocks=1 00:07:05.349 --rc geninfo_unexecuted_blocks=1 00:07:05.349 00:07:05.349 ' 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:05.349 21:48:49 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:05.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:05.349 21:48:49 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:05.350 21:48:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:05.350 21:48:49 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:05.350 21:48:49 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:05.350 21:48:49 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73172 00:07:05.350 21:48:49 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:05.350 21:48:49 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 73172 00:07:05.350 21:48:49 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 73172 ']' 00:07:05.350 21:48:49 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.350 21:48:49 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.350 21:48:49 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:05.350 21:48:49 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.350 21:48:49 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.350 21:48:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.350 [2024-09-30 21:48:49.989751] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:05.350 [2024-09-30 21:48:49.989864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73172 ] 00:07:05.350 [2024-09-30 21:48:50.118372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
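Before the test cases run, blockdev.sh generates the bdev configuration with scripts/gen_nvme.sh and loads it into the freshly started target, attaching the four emulated QEMU controllers as Nvme0 through Nvme3; the full JSON appears in the load_subsystem_config call below. A minimal sketch of the equivalent per-controller attach over RPC, assuming the PCIe addresses from this VM (0000:00:10.0 through 0000:00:13.0):

    # One attach per controller; addresses are specific to this run's QEMU VM
    for i in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
            -b "Nvme$i" -t PCIe -a "0000:00:1$i.0"
    done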
00:07:05.350 [2024-09-30 21:48:50.139692] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.609 [2024-09-30 21:48:50.172382] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.180 21:48:50 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.180 21:48:50 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:07:06.180 21:48:50 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:06.180 21:48:50 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:06.180 21:48:50 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:06.180 21:48:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:06.180 21:48:50 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:06.180 21:48:50 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:06.180 21:48:50 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.180 21:48:50 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.440 21:48:51 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.440 21:48:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:06.440 21:48:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.440 21:48:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.440 21:48:51 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.440 21:48:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.440 21:48:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:06.441 21:48:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:06.441 21:48:51 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:06.441 21:48:51 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:06.441 21:48:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.441 21:48:51 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:06.441 21:48:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:06.441 21:48:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:06.703 21:48:51 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "2f0ecd23-0a33-473b-aa7e-5b23968e163d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2f0ecd23-0a33-473b-aa7e-5b23968e163d",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6bd20267-9caf-44a3-b387-6ed986a2d97e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6bd20267-9caf-44a3-b387-6ed986a2d97e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n1",' ' "aliases": [' ' "ca497e41-5525-4b2b-85df-08e817104664"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ca497e41-5525-4b2b-85df-08e817104664",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "28d7e534-641a-4ac4-861e-862282314c0e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "28d7e534-641a-4ac4-861e-862282314c0e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ba0cd025-f38e-4c22-91a6-bcdb69b70cd6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ba0cd025-f38e-4c22-91a6-bcdb69b70cd6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d8ad8e28-ad7e-43ca-be58-4a76cf762c70"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d8ad8e28-ad7e-43ca-be58-4a76cf762c70",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:06.703 21:48:51 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:06.703 21:48:51 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:06.703 21:48:51 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:06.703 21:48:51 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 73172 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 73172 ']' 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 73172 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73172 00:07:06.703 killing process with pid 73172 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:06.703 21:48:51 blockdev_nvme 
-- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73172' 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 73172 00:07:06.703 21:48:51 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 73172 00:07:06.965 21:48:51 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:06.965 21:48:51 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:06.965 21:48:51 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:06.965 21:48:51 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.965 21:48:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:06.965 ************************************ 00:07:06.965 START TEST bdev_hello_world 00:07:06.965 ************************************ 00:07:06.965 21:48:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:06.965 [2024-09-30 21:48:51.653836] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:06.965 [2024-09-30 21:48:51.653956] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73245 ] 00:07:07.227 [2024-09-30 21:48:51.781145] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:07.227 [2024-09-30 21:48:51.801925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.227 [2024-09-30 21:48:51.834427] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.488 [2024-09-30 21:48:52.207553] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:07.489 [2024-09-30 21:48:52.207596] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:07.489 [2024-09-30 21:48:52.207620] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:07.489 [2024-09-30 21:48:52.209677] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:07.489 [2024-09-30 21:48:52.210510] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:07.489 [2024-09-30 21:48:52.210637] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:07.489 [2024-09-30 21:48:52.210982] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
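The hello_bdev example traced above opens Nvme0n1, writes "Hello World!" through an I/O channel, reads it back, and stops the app. The invocation as used by this test, where bdev.json is the config generated earlier by gen_nvme.sh:

    # Run the example app against the first NVMe bdev from the generated config
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1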
00:07:07.489 00:07:07.489 [2024-09-30 21:48:52.211007] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:07.750 00:07:07.750 real 0m0.781s 00:07:07.750 user 0m0.515s 00:07:07.750 sys 0m0.161s 00:07:07.750 ************************************ 00:07:07.750 END TEST bdev_hello_world 00:07:07.750 ************************************ 00:07:07.750 21:48:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.750 21:48:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:07.750 21:48:52 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:07.750 21:48:52 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.750 21:48:52 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.750 21:48:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.750 ************************************ 00:07:07.750 START TEST bdev_bounds 00:07:07.750 ************************************ 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:07.750 Process bdevio pid: 73276 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73276 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73276' 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73276 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 73276 ']' 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.750 21:48:52 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:07.750 [2024-09-30 21:48:52.479746] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:07.750 [2024-09-30 21:48:52.479882] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73276 ] 00:07:08.011 [2024-09-30 21:48:52.611133] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:08.011 [2024-09-30 21:48:52.628538] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.011 [2024-09-30 21:48:52.668386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.011 [2024-09-30 21:48:52.668577] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.011 [2024-09-30 21:48:52.668634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.583 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.583 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:08.583 21:48:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:08.844 I/O targets: 00:07:08.844 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:08.844 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:08.844 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.844 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.844 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:08.844 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:08.844 00:07:08.844 00:07:08.844 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.844 http://cunit.sourceforge.net/ 00:07:08.844 00:07:08.844 00:07:08.844 Suite: bdevio tests on: Nvme3n1 00:07:08.844 Test: blockdev write read block ...passed 00:07:08.844 Test: blockdev write zeroes read block ...passed 00:07:08.844 Test: blockdev write zeroes read no split ...passed 00:07:08.844 Test: blockdev write zeroes read split ...passed 00:07:08.844 Test: blockdev write zeroes read split partial ...passed 00:07:08.844 Test: blockdev reset ...[2024-09-30 21:48:53.442488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:08.844 passed 00:07:08.844 Test: blockdev write read 8 blocks ...[2024-09-30 21:48:53.444591] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:08.844 passed 00:07:08.844 Test: blockdev write read size > 128k ...passed 00:07:08.844 Test: blockdev write read invalid size ...passed 00:07:08.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.844 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.844 Test: blockdev write read max offset ...passed 00:07:08.844 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.844 Test: blockdev writev readv 8 blocks ...passed 00:07:08.844 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.844 Test: blockdev writev readv block ...passed 00:07:08.844 Test: blockdev writev readv size > 128k ...passed 00:07:08.844 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.844 Test: blockdev comparev and writev ...[2024-09-30 21:48:53.449804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c0006000 len:0x1000 00:07:08.844 [2024-09-30 21:48:53.449852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.844 passed 00:07:08.844 Test: blockdev nvme passthru rw ...passed 00:07:08.844 Test: blockdev nvme passthru vendor specific ...[2024-09-30 21:48:53.450449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.844 [2024-09-30 21:48:53.450517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.844 passed 00:07:08.844 Test: blockdev nvme admin passthru ...passed 00:07:08.844 Test: blockdev copy ...passed 00:07:08.844 Suite: bdevio tests on: Nvme2n3 00:07:08.844 Test: blockdev write read block ...passed 00:07:08.844 Test: blockdev write zeroes read block ...passed 00:07:08.844 Test: blockdev write zeroes read no split ...passed 00:07:08.844 Test: blockdev write zeroes read split ...passed 00:07:08.844 Test: blockdev write zeroes read split partial ...passed 00:07:08.844 Test: blockdev reset ...[2024-09-30 21:48:53.462807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:08.844 passed 00:07:08.844 Test: blockdev write read 8 blocks ...[2024-09-30 21:48:53.464439] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:08.844 passed 00:07:08.844 Test: blockdev write read size > 128k ...passed 00:07:08.844 Test: blockdev write read invalid size ...passed 00:07:08.844 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.845 Test: blockdev write read max offset ...passed 00:07:08.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.845 Test: blockdev writev readv 8 blocks ...passed 00:07:08.845 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.845 Test: blockdev writev readv block ...passed 00:07:08.845 Test: blockdev writev readv size > 128k ...passed 00:07:08.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.845 Test: blockdev comparev and writev ...[2024-09-30 21:48:53.468157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2c05000 len:0x1000 00:07:08.845 [2024-09-30 21:48:53.468212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev nvme passthru rw ...passed 00:07:08.845 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.845 Test: blockdev nvme admin passthru ...[2024-09-30 21:48:53.468747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.845 [2024-09-30 21:48:53.468776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev copy ...passed 00:07:08.845 Suite: bdevio tests on: Nvme2n2 00:07:08.845 Test: blockdev write read block ...passed 00:07:08.845 Test: blockdev write zeroes read block ...passed 00:07:08.845 Test: blockdev write zeroes read no split ...passed 00:07:08.845 Test: blockdev write zeroes read split ...passed 00:07:08.845 Test: blockdev write zeroes read split partial ...passed 00:07:08.845 Test: blockdev reset ...[2024-09-30 21:48:53.484450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:08.845 passed 00:07:08.845 Test: blockdev write read 8 blocks ...[2024-09-30 21:48:53.486102] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:08.845 passed 00:07:08.845 Test: blockdev write read size > 128k ...passed 00:07:08.845 Test: blockdev write read invalid size ...passed 00:07:08.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.845 Test: blockdev write read max offset ...passed 00:07:08.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.845 Test: blockdev writev readv 8 blocks ...passed 00:07:08.845 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.845 Test: blockdev writev readv block ...passed 00:07:08.845 Test: blockdev writev readv size > 128k ...passed 00:07:08.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.845 Test: blockdev comparev and writev ...[2024-09-30 21:48:53.489884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3036000 len:0x1000 00:07:08.845 [2024-09-30 21:48:53.490007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev nvme passthru rw ...passed 00:07:08.845 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.845 Test: blockdev nvme admin passthru ...[2024-09-30 21:48:53.490454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.845 [2024-09-30 21:48:53.490483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev copy ...passed 00:07:08.845 Suite: bdevio tests on: Nvme2n1 00:07:08.845 Test: blockdev write read block ...passed 00:07:08.845 Test: blockdev write zeroes read block ...passed 00:07:08.845 Test: blockdev write zeroes read no split ...passed 00:07:08.845 Test: blockdev write zeroes read split ...passed 00:07:08.845 Test: blockdev write zeroes read split partial ...passed 00:07:08.845 Test: blockdev reset ...[2024-09-30 21:48:53.505376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:08.845 passed 00:07:08.845 Test: blockdev write read 8 blocks ...[2024-09-30 21:48:53.506925] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:07:08.845 passed 00:07:08.845 Test: blockdev write read size > 128k ...passed 00:07:08.845 Test: blockdev write read invalid size ...passed 00:07:08.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.845 Test: blockdev write read max offset ...passed 00:07:08.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.845 Test: blockdev writev readv 8 blocks ...passed 00:07:08.845 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.845 Test: blockdev writev readv block ...passed 00:07:08.845 Test: blockdev writev readv size > 128k ...passed 00:07:08.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.845 Test: blockdev comparev and writev ...[2024-09-30 21:48:53.510830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d3030000 len:0x1000 00:07:08.845 [2024-09-30 21:48:53.510871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev nvme passthru rw ...passed 00:07:08.845 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.845 Test: blockdev nvme admin passthru ...[2024-09-30 21:48:53.511366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.845 [2024-09-30 21:48:53.511396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev copy ...passed 00:07:08.845 Suite: bdevio tests on: Nvme1n1 00:07:08.845 Test: blockdev write read block ...passed 00:07:08.845 Test: blockdev write zeroes read block ...passed 00:07:08.845 Test: blockdev write zeroes read no split ...passed 00:07:08.845 Test: blockdev write zeroes read split ...passed 00:07:08.845 Test: blockdev write zeroes read split partial ...passed 00:07:08.845 Test: blockdev reset ...[2024-09-30 21:48:53.525734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:08.845 passed 00:07:08.845 Test: blockdev write read 8 blocks ...[2024-09-30 21:48:53.526967] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:08.845 passed 00:07:08.845 Test: blockdev write read size > 128k ...passed 00:07:08.845 Test: blockdev write read invalid size ...passed 00:07:08.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.845 Test: blockdev write read max offset ...passed 00:07:08.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.845 Test: blockdev writev readv 8 blocks ...passed 00:07:08.845 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.845 Test: blockdev writev readv block ...passed 00:07:08.845 Test: blockdev writev readv size > 128k ...passed 00:07:08.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.845 Test: blockdev comparev and writev ...[2024-09-30 21:48:53.530552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d302c000 len:0x1000 00:07:08.845 [2024-09-30 21:48:53.530588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev nvme passthru rw ...passed 00:07:08.845 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.845 Test: blockdev nvme admin passthru ...[2024-09-30 21:48:53.531071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:08.845 [2024-09-30 21:48:53.531098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev copy ...passed 00:07:08.845 Suite: bdevio tests on: Nvme0n1 00:07:08.845 Test: blockdev write read block ...passed 00:07:08.845 Test: blockdev write zeroes read block ...passed 00:07:08.845 Test: blockdev write zeroes read no split ...passed 00:07:08.845 Test: blockdev write zeroes read split ...passed 00:07:08.845 Test: blockdev write zeroes read split partial ...passed 00:07:08.845 Test: blockdev reset ...[2024-09-30 21:48:53.549688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:08.845 passed 00:07:08.845 Test: blockdev write read 8 blocks ...[2024-09-30 21:48:53.551184] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:08.845 passed 00:07:08.845 Test: blockdev write read size > 128k ...passed 00:07:08.845 Test: blockdev write read invalid size ...passed 00:07:08.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:08.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:08.845 Test: blockdev write read max offset ...passed 00:07:08.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:08.845 Test: blockdev writev readv 8 blocks ...passed 00:07:08.845 Test: blockdev writev readv 30 x 1block ...passed 00:07:08.845 Test: blockdev writev readv block ...passed 00:07:08.845 Test: blockdev writev readv size > 128k ...passed 00:07:08.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:08.845 Test: blockdev comparev and writev ...passed 00:07:08.845 Test: blockdev nvme passthru rw ...[2024-09-30 21:48:53.554975] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:08.845 separate metadata which is not supported yet. 
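A note on the skip above: Nvme0n1 was created with 64 bytes of separate (non-interleaved) metadata per block ("md_size": 64, "md_interleave": false in the bdev dump earlier), and bdevio's comparev_and_writev case does not handle separate metadata yet, so it is skipped rather than failed. One way to confirm the layout while the target is still up, a sketch using rpc.py and jq (the field names match the dump above):

    # Inspect the metadata layout of the bdev that triggered the skip
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
        | jq '.[0] | {block_size, md_size, md_interleave, dif_type}'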
00:07:08.845 passed 00:07:08.845 Test: blockdev nvme passthru vendor specific ...passed 00:07:08.845 Test: blockdev nvme admin passthru ...[2024-09-30 21:48:53.555453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:08.845 [2024-09-30 21:48:53.555492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:08.845 passed 00:07:08.845 Test: blockdev copy ...passed 00:07:08.845 00:07:08.846 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.846 suites 6 6 n/a 0 0 00:07:08.846 tests 138 138 138 0 0 00:07:08.846 asserts 893 893 893 0 n/a 00:07:08.846 00:07:08.846 Elapsed time = 0.329 seconds 00:07:08.846 0 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73276 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 73276 ']' 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 73276 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73276 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73276' 00:07:08.846 killing process with pid 73276 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 73276 00:07:08.846 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 73276 00:07:09.105 21:48:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:09.105 00:07:09.105 real 0m1.319s 00:07:09.105 user 0m3.340s 00:07:09.105 sys 0m0.262s 00:07:09.105 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.105 21:48:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:09.105 ************************************ 00:07:09.105 END TEST bdev_bounds 00:07:09.105 ************************************ 00:07:09.105 21:48:53 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.105 21:48:53 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:09.105 21:48:53 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.105 21:48:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.105 ************************************ 00:07:09.105 START TEST bdev_nbd 00:07:09.105 ************************************ 00:07:09.105 21:48:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:09.105 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:09.105 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:09.105 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73319 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73319 /var/tmp/spdk-nbd.sock 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 73319 ']' 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:09.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.106 21:48:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.106 [2024-09-30 21:48:53.877988] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:09.106 [2024-09-30 21:48:53.878368] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.366 [2024-09-30 21:48:54.010149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
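The nbd_function_test prologue above boots a bdev_svc app against bdev.json and then drives every NBD operation through a dedicated RPC socket. A minimal sketch of the export/probe/teardown loop the following trace performs, assuming an SPDK checkout at the path the log uses and a bdev_svc already listening on /var/tmp/spdk-nbd.sock (all names are taken from the trace; this is a condensed illustration, not a drop-in replacement for the test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    # Export bdev Nvme0n1 as /dev/nbd0 over the NBD-specific RPC socket.
    $rpc -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    # Wait until the kernel publishes the device, as waitfornbd does.
    until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
    # One 4 KiB O_DIRECT read, mirroring the per-device smoke test below.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    # Tear the export down again.
    $rpc -s "$sock" nbd_stop_disk /dev/nbd0

The trace repeats this pattern once per bdev (Nvme0n1 through Nvme3n1), bumping the /dev/nbdN index each time.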
00:07:09.367 [2024-09-30 21:48:54.029885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.367 [2024-09-30 21:48:54.061153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:09.939 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.201 1+0 records in 00:07:10.201 1+0 records out 00:07:10.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703998 s, 5.8 MB/s 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # size=4096 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.201 21:48:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:10.462 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.463 1+0 records in 00:07:10.463 1+0 records out 00:07:10.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358486 s, 11.4 MB/s 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.463 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # 
(( i = 1 )) 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.724 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.724 1+0 records in 00:07:10.725 1+0 records out 00:07:10.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000696912 s, 5.9 MB/s 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.725 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:10.986 1+0 records in 00:07:10.986 1+0 records out 00:07:10.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471316 s, 8.7 MB/s 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 
'!=' 0 ']' 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:10.986 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.248 1+0 records in 00:07:11.248 1+0 records out 00:07:11.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814586 s, 5.0 MB/s 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.248 21:48:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:11.509 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@873 -- # break 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:11.510 1+0 records in 00:07:11.510 1+0 records out 00:07:11.510 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107111 s, 3.8 MB/s 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:11.510 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd0", 00:07:11.772 "bdev_name": "Nvme0n1" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd1", 00:07:11.772 "bdev_name": "Nvme1n1" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd2", 00:07:11.772 "bdev_name": "Nvme2n1" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd3", 00:07:11.772 "bdev_name": "Nvme2n2" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd4", 00:07:11.772 "bdev_name": "Nvme2n3" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd5", 00:07:11.772 "bdev_name": "Nvme3n1" 00:07:11.772 } 00:07:11.772 ]' 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd0", 00:07:11.772 "bdev_name": "Nvme0n1" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd1", 00:07:11.772 "bdev_name": "Nvme1n1" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd2", 00:07:11.772 "bdev_name": "Nvme2n1" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd3", 00:07:11.772 "bdev_name": "Nvme2n2" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd4", 00:07:11.772 "bdev_name": "Nvme2n3" 00:07:11.772 }, 00:07:11.772 { 00:07:11.772 "nbd_device": "/dev/nbd5", 00:07:11.772 "bdev_name": "Nvme3n1" 00:07:11.772 } 00:07:11.772 ]' 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.772 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.033 21:48:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:12.294 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:12.294 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:12.294 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:12.294 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.294 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.294 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:12.295 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.295 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.295 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.295 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:12.558 
21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.558 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.819 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.820 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.820 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.081 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@65 -- # echo '' 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.342 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.343 21:48:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:13.604 /dev/nbd0 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.604 
21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.604 1+0 records in 00:07:13.604 1+0 records out 00:07:13.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131755 s, 3.1 MB/s 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.604 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:13.865 /dev/nbd1 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.865 1+0 records in 00:07:13.865 1+0 records out 00:07:13.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00144074 s, 2.8 MB/s 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.865 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.866 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:13.866 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.866 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:13.866 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:14.127 /dev/nbd10 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.127 1+0 records in 00:07:14.127 1+0 records out 00:07:14.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011116 s, 3.7 MB/s 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.127 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.128 21:48:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:14.451 /dev/nbd11 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.451 1+0 records in 00:07:14.451 1+0 records 
out 00:07:14.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000689168 s, 5.9 MB/s 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.451 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:14.711 /dev/nbd12 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.711 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.712 1+0 records in 00:07:14.712 1+0 records out 00:07:14.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639703 s, 6.4 MB/s 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:14.712 /dev/nbd13 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:14.712 21:48:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:14.712 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.973 1+0 records in 00:07:14.973 1+0 records out 00:07:14.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631867 s, 6.5 MB/s 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd0", 00:07:14.973 "bdev_name": "Nvme0n1" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd1", 00:07:14.973 "bdev_name": "Nvme1n1" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd10", 00:07:14.973 "bdev_name": "Nvme2n1" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd11", 00:07:14.973 "bdev_name": "Nvme2n2" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd12", 00:07:14.973 "bdev_name": "Nvme2n3" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd13", 00:07:14.973 "bdev_name": "Nvme3n1" 00:07:14.973 } 00:07:14.973 ]' 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd0", 00:07:14.973 "bdev_name": "Nvme0n1" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd1", 00:07:14.973 "bdev_name": "Nvme1n1" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd10", 00:07:14.973 "bdev_name": "Nvme2n1" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd11", 00:07:14.973 "bdev_name": "Nvme2n2" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd12", 00:07:14.973 "bdev_name": "Nvme2n3" 00:07:14.973 }, 00:07:14.973 { 00:07:14.973 "nbd_device": "/dev/nbd13", 00:07:14.973 "bdev_name": 
"Nvme3n1" 00:07:14.973 } 00:07:14.973 ]' 00:07:14.973 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:15.235 /dev/nbd1 00:07:15.235 /dev/nbd10 00:07:15.235 /dev/nbd11 00:07:15.235 /dev/nbd12 00:07:15.235 /dev/nbd13' 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:15.235 /dev/nbd1 00:07:15.235 /dev/nbd10 00:07:15.235 /dev/nbd11 00:07:15.235 /dev/nbd12 00:07:15.235 /dev/nbd13' 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:15.235 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:15.235 256+0 records in 00:07:15.235 256+0 records out 00:07:15.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00630441 s, 166 MB/s 00:07:15.236 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.236 21:48:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:15.498 256+0 records in 00:07:15.498 256+0 records out 00:07:15.498 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.265975 s, 3.9 MB/s 00:07:15.498 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.498 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:15.758 256+0 records in 00:07:15.758 256+0 records out 00:07:15.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239189 s, 4.4 MB/s 00:07:15.758 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.758 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:15.758 256+0 records in 00:07:15.758 256+0 records out 00:07:15.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219184 s, 4.8 MB/s 00:07:15.758 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:15.758 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 
count=256 oflag=direct 00:07:16.020 256+0 records in 00:07:16.020 256+0 records out 00:07:16.020 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259467 s, 4.0 MB/s 00:07:16.020 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.020 21:49:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:16.282 256+0 records in 00:07:16.282 256+0 records out 00:07:16.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238783 s, 4.4 MB/s 00:07:16.282 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:16.282 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:16.543 256+0 records in 00:07:16.543 256+0 records out 00:07:16.543 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214574 s, 4.9 MB/s 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # 
nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.543 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.896 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.168 21:49:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.430 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.691 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:17.954 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 
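The data-integrity pass traced above writes one random 1 MiB pattern to every export and reads it back for a byte-for-byte comparison. A condensed sketch under the same assumptions, with the pattern file path and device list copied from the trace:

    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    # 256 x 4 KiB = 1 MiB of random data as the reference pattern.
    dd if=/dev/urandom of="$pattern" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        # Write the pattern through the NBD export with O_DIRECT...
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct
        # ...then verify the first 1 MiB of the device against it.
        cmp -b -n 1M "$pattern" "$nbd"
    done
    rm "$pattern"

After the per-device verification, the suite stops each export and confirms via nbd_get_disks (parsed with jq, as in the trace that follows) that no /dev/nbd entries remain registered.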
00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.215 21:49:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:18.476 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:18.738 malloc_lvol_verify 00:07:18.738 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:18.999 fa99dc80-589a-4863-9851-c604787ba02d 00:07:18.999 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:18.999 63f5f1d3-4d05-484d-af0b-03485d86a748 00:07:18.999 21:49:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:19.259 /dev/nbd0 00:07:19.259 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:19.259 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:19.259 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:19.259 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:19.259 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:19.259 mke2fs 1.47.0 (5-Feb-2023) 00:07:19.259 Discarding device blocks: 0/4096 done 00:07:19.259 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:19.259 00:07:19.259 Allocating group tables: 0/1 done 00:07:19.259 Writing inode tables: 0/1 done 00:07:19.259 Creating journal (1024 blocks): done 00:07:19.259 Writing superblocks and filesystem accounting information: 0/1 done 00:07:19.259 00:07:19.260 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:19.260 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.260 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:19.260 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.260 21:49:04 
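Stripped of the xtrace noise, nbd_with_lvol_verify reduces to a short RPC sequence plus a filesystem check; a sketch assembled from the calls traced above (sizes are in MiB and match the 4096 1k-block mkfs.ext4 output):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
  rpc bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB malloc bdev, 512 B blocks
  rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs  # logical volume store on top of it
  rpc bdev_lvol_create lvol 4 -l lvs                   # 4 MiB lvol inside the store
  rpc nbd_start_disk lvs/lvol /dev/nbd0                # expose the lvol as /dev/nbd0
  (( $(< /sys/block/nbd0/size) != 0 ))                 # capacity is set (8192 sectors above)
  mkfs.ext4 /dev/nbd0                                  # real writes through the whole nbd path
  rpc nbd_stop_disk /dev/nbd0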
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:19.260 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.260 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73319 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 73319 ']' 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 73319 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73319 00:07:19.521 killing process with pid 73319 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73319' 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 73319 00:07:19.521 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 73319 00:07:19.781 ************************************ 00:07:19.781 END TEST bdev_nbd 00:07:19.781 ************************************ 00:07:19.781 21:49:04 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:19.781 00:07:19.781 real 0m10.741s 00:07:19.781 user 0m14.918s 00:07:19.781 sys 0m3.537s 00:07:19.781 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:19.781 21:49:04 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:20.042 21:49:04 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:20.042 21:49:04 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:20.042 21:49:04 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:20.042 skipping fio tests on NVMe due to multi-ns failures. 
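killprocess, traced just before the END banner, is the stock teardown helper: confirm the pid is alive, check what the process is, then SIGTERM and reap it. A simplified sketch (the real helper in test/common/autotest_common.sh also handles FreeBSD and apps launched through sudo):

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1               # fails if the process already exited
    local process_name=unknown
    if [[ $(uname) == Linux ]]; then
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the trace above
    fi
    [[ $process_name != sudo ]] || return 1  # simplified: the real helper signals sudo's child instead
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                              # reap, so the caller sees the app's exit status
  }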
00:07:20.042 21:49:04 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:20.042 21:49:04 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:20.042 21:49:04 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:20.042 21:49:04 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.042 21:49:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.042 ************************************ 00:07:20.042 START TEST bdev_verify 00:07:20.042 ************************************ 00:07:20.042 21:49:04 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:20.042 [2024-09-30 21:49:04.689793] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:20.042 [2024-09-30 21:49:04.689931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73703 ] 00:07:20.042 [2024-09-30 21:49:04.824625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:20.042 [2024-09-30 21:49:04.837642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:20.303 [2024-09-30 21:49:04.897318] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.303 [2024-09-30 21:49:04.897373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.563 Running I/O for 5 seconds... 
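Before the results land, the bdevperf invocation that run_test just launched is worth unpacking. The annotations below are editorial readings of the traced flags, not official documentation; the -C/-m pairing is why each Nvme bdev appears twice in the table that follows, once per core mask:

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # config attaching the four NVMe controllers
    -q 128     # queue depth: 128 outstanding I/Os per job
    -o 4096    # I/O size: 4 KiB
    -w verify  # write a pattern, read it back, compare
    -t 5       # run for 5 seconds
    -C         # every core sends I/O to every bdev
    -m 0x3     # core mask: reactors on cores 0 and 1
  )
  "$bdevperf" "${args[@]}"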
00:07:25.797 17152.00 IOPS, 67.00 MiB/s 17632.00 IOPS, 68.88 MiB/s 17856.00 IOPS, 69.75 MiB/s 18016.00 IOPS, 70.38 MiB/s 18163.20 IOPS, 70.95 MiB/s
00:07:25.797 Latency(us)
00:07:25.797 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:25.797 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0x0 length 0xbd0bd
00:07:25.797 Nvme0n1 : 5.05 1494.01 5.84 0.00 0.00 85407.21 17341.83 94371.84
00:07:25.797 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:25.797 Nvme0n1 : 5.05 1470.34 5.74 0.00 0.00 86654.12 17442.66 89128.96
00:07:25.797 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0x0 length 0xa0000
00:07:25.797 Nvme1n1 : 5.06 1493.56 5.83 0.00 0.00 85281.60 17442.66 86305.87
00:07:25.797 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0xa0000 length 0xa0000
00:07:25.797 Nvme1n1 : 5.07 1475.91 5.77 0.00 0.00 86258.60 5444.53 82676.18
00:07:25.797 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0x0 length 0x80000
00:07:25.797 Nvme2n1 : 5.06 1493.12 5.83 0.00 0.00 84993.89 16938.54 69770.63
00:07:25.797 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0x80000 length 0x80000
00:07:25.797 Nvme2n1 : 5.10 1482.14 5.79 0.00 0.00 85657.91 18854.20 75820.11
00:07:25.797 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0x0 length 0x80000
00:07:25.797 Nvme2n2 : 5.10 1506.12 5.88 0.00 0.00 84113.26 11897.30 64124.46
00:07:25.797 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.797 Verification LBA range: start 0x80000 length 0x80000
00:07:25.798 Nvme2n2 : 5.10 1481.67 5.79 0.00 0.00 85470.71 18249.26 75013.51
00:07:25.798 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.798 Verification LBA range: start 0x0 length 0x80000
00:07:25.798 Nvme2n3 : 5.10 1505.26 5.88 0.00 0.00 84018.66 13611.32 67754.14
00:07:25.798 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.798 Verification LBA range: start 0x80000 length 0x80000
00:07:25.798 Nvme2n3 : 5.10 1481.14 5.79 0.00 0.00 85288.81 16434.41 75416.81
00:07:25.798 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:25.798 Verification LBA range: start 0x0 length 0x20000
00:07:25.798 Nvme3n1 : 5.10 1504.52 5.88 0.00 0.00 83905.01 14014.62 67350.84
00:07:25.798 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:25.798 Verification LBA range: start 0x20000 length 0x20000
00:07:25.798 Nvme3n1 : 5.10 1480.33 5.78 0.00 0.00 85181.67 13812.97 79853.10
00:07:25.798 ===================================================================================================================
00:07:25.798 Total : 17868.13 69.80 0.00 0.00 85178.17 5444.53 94371.84
00:07:26.371
00:07:26.371 real 0m6.462s
00:07:26.371 user 0m11.901s
00:07:26.371 sys 0m0.306s
00:07:26.371 21:49:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:26.371 21:49:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:26.371 ************************************
00:07:26.371 END TEST bdev_verify 00:07:26.371 ************************************ 00:07:26.371 21:49:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:26.371 21:49:11 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:26.371 21:49:11 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.371 21:49:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:26.372 ************************************ 00:07:26.372 START TEST bdev_verify_big_io 00:07:26.372 ************************************ 00:07:26.372 21:49:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:26.634 [2024-09-30 21:49:11.221078] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:26.634 [2024-09-30 21:49:11.221252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73796 ] 00:07:26.634 [2024-09-30 21:49:11.355372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:26.634 [2024-09-30 21:49:11.373685] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:26.634 [2024-09-30 21:49:11.437475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.634 [2024-09-30 21:49:11.437599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.207 Running I/O for 5 seconds... 
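The big-I/O pass now starting is the identical bdevperf invocation with a single flag changed, so the drop from roughly 17.9k IOPS in the 4 KiB run to roughly 1.5k IOPS in the table below is attributable to I/O size alone:

  -o 65536   # 64 KiB I/Os; -q 128, -w verify, -t 5, -C and -m 0x3 are unchanged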
00:07:33.315 814.00 IOPS, 50.88 MiB/s 2243.50 IOPS, 140.22 MiB/s 2758.33 IOPS, 172.40 MiB/s
00:07:33.315 Latency(us)
00:07:33.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:33.315 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x0 length 0xbd0b
00:07:33.315 Nvme0n1 : 5.79 127.27 7.95 0.00 0.00 950335.27 16736.89 1071160.71
00:07:33.315 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:33.315 Nvme0n1 : 5.65 113.35 7.08 0.00 0.00 1088784.70 29037.49 1058255.16
00:07:33.315 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x0 length 0xa000
00:07:33.315 Nvme1n1 : 5.79 128.27 8.02 0.00 0.00 916813.56 106470.79 896935.78
00:07:33.315 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0xa000 length 0xa000
00:07:33.315 Nvme1n1 : 5.65 113.29 7.08 0.00 0.00 1054356.72 96791.63 987274.63
00:07:33.315 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x0 length 0x8000
00:07:33.315 Nvme2n1 : 5.87 131.56 8.22 0.00 0.00 871317.45 130668.70 716258.07
00:07:33.315 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x8000 length 0x8000
00:07:33.315 Nvme2n1 : 5.79 114.62 7.16 0.00 0.00 1003096.20 139541.27 916294.10
00:07:33.315 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x0 length 0x8000
00:07:33.315 Nvme2n2 : 5.94 138.13 8.63 0.00 0.00 814516.39 15829.46 1232480.10
00:07:33.315 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x8000 length 0x8000
00:07:33.315 Nvme2n2 : 5.93 125.16 7.82 0.00 0.00 903723.37 27625.94 974369.08
00:07:33.315 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x0 length 0x8000
00:07:33.315 Nvme2n3 : 5.96 137.67 8.60 0.00 0.00 792045.47 39119.95 1742249.35
00:07:33.315 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x8000 length 0x8000
00:07:33.315 Nvme2n3 : 5.93 129.45 8.09 0.00 0.00 850603.45 45774.38 1000180.18
00:07:33.315 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x0 length 0x2000
00:07:33.315 Nvme3n1 : 5.97 153.97 9.62 0.00 0.00 688254.68 5142.06 1768060.46
00:07:33.315 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:33.315 Verification LBA range: start 0x2000 length 0x2000
00:07:33.315 Nvme3n1 : 5.94 139.98 8.75 0.00 0.00 763241.54 1625.80 1025991.29
00:07:33.315 ===================================================================================================================
00:07:33.315 Total : 1552.73 97.05 0.00 0.00 879481.47 1625.80 1768060.46
00:07:34.261
00:07:34.261 real 0m7.577s
00:07:34.261 user 0m13.804s
00:07:34.261 sys 0m0.338s
00:07:34.261 21:49:18 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:34.261 ************************************
00:07:34.261 END TEST bdev_verify_big_io
00:07:34.261 ************************************
00:07:34.261 21:49:18
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:34.261 21:49:18 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.261 21:49:18 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:34.261 21:49:18 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.261 21:49:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.261 ************************************ 00:07:34.261 START TEST bdev_write_zeroes 00:07:34.261 ************************************ 00:07:34.261 21:49:18 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.261 [2024-09-30 21:49:18.863733] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:34.261 [2024-09-30 21:49:18.863885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73895 ] 00:07:34.261 [2024-09-30 21:49:18.997282] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:34.261 [2024-09-30 21:49:19.016868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.261 [2024-09-30 21:49:19.071774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.832 Running I/O for 1 seconds... 
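This pass swaps the workload rather than the size; with no -C or -m flag bdevperf stays on a single reactor ("Total cores available: 1" above), which is why each bdev reports only once in the table below:

  -w write_zeroes -t 1   # issue WRITE ZEROES for one second on core 0; -q 128 -o 4096 as before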
00:07:35.774 47125.00 IOPS, 184.08 MiB/s
00:07:35.774 Latency(us)
00:07:35.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:35.774 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:35.774 Nvme0n1 : 1.02 7855.32 30.68 0.00 0.00 16255.58 5444.53 34482.02
00:07:35.774 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:35.774 Nvme1n1 : 1.02 7887.93 30.81 0.00 0.00 16169.36 10637.00 26416.05
00:07:35.774 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:35.774 Nvme2n1 : 1.02 7878.52 30.78 0.00 0.00 16104.49 10687.41 25609.45
00:07:35.774 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:35.774 Nvme2n2 : 1.02 7869.33 30.74 0.00 0.00 16073.90 10082.46 25004.50
00:07:35.774 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:35.774 Nvme2n3 : 1.03 7860.02 30.70 0.00 0.00 16038.12 7461.02 25306.98
00:07:35.774 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:35.774 Nvme3n1 : 1.03 7788.40 30.42 0.00 0.00 16145.24 9376.69 25508.63
00:07:35.774 ===================================================================================================================
00:07:35.774 Total : 47139.52 184.14 0.00 0.00 16130.99 5444.53 34482.02
00:07:36.035 ************************************
00:07:36.035 END TEST bdev_write_zeroes
00:07:36.035 ************************************
00:07:36.035
00:07:36.035 real 0m2.010s
00:07:36.035 user 0m1.640s
00:07:36.035 sys 0m0.249s
00:07:36.035 21:49:20 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:36.035 21:49:20 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:36.297 21:49:20 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:36.297 21:49:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:36.297 21:49:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:36.297 21:49:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.297 ************************************
00:07:36.297 START TEST bdev_json_nonenclosed
00:07:36.297 ************************************
00:07:36.297 21:49:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:36.297 [2024-09-30 21:49:20.931205] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization...
00:07:36.297 [2024-09-30 21:49:20.931361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73939 ]
00:07:36.297 [2024-09-30 21:49:21.065058] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:36.297 [2024-09-30 21:49:21.085406] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.559 [2024-09-30 21:49:21.143391] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.559 [2024-09-30 21:49:21.143514] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:36.559 [2024-09-30 21:49:21.143534] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:36.559 [2024-09-30 21:49:21.143549] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:36.559 00:07:36.559 real 0m0.403s 00:07:36.559 user 0m0.180s 00:07:36.559 sys 0m0.118s 00:07:36.559 21:49:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.559 ************************************ 00:07:36.559 END TEST bdev_json_nonenclosed 00:07:36.559 ************************************ 00:07:36.559 21:49:21 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:36.559 21:49:21 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.559 21:49:21 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:36.559 21:49:21 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.559 21:49:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:36.559 ************************************ 00:07:36.559 START TEST bdev_json_nonarray 00:07:36.559 ************************************ 00:07:36.559 21:49:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:36.821 [2024-09-30 21:49:21.398909] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:36.821 [2024-09-30 21:49:21.399072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73959 ] 00:07:36.821 [2024-09-30 21:49:21.533675] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.821 [2024-09-30 21:49:21.552486] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.821 [2024-09-30 21:49:21.615139] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.821 [2024-09-30 21:49:21.615293] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
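Both JSON tests drive the same binary with deliberately malformed configs: nonenclosed.json drops the outer braces ("not enclosed in {}" above) and nonarray.json makes "subsystems" something other than an array. For contrast, this is the minimal shape a valid --json config takes, mirroring the load_subsystem_config payload that appears later in this log:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
          }
        ]
      }
    ]
  }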
00:07:36.822 [2024-09-30 21:49:21.615314] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:36.822 [2024-09-30 21:49:21.615325] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.084 00:07:37.084 real 0m0.410s 00:07:37.084 user 0m0.171s 00:07:37.084 sys 0m0.132s 00:07:37.084 21:49:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.084 ************************************ 00:07:37.084 END TEST bdev_json_nonarray 00:07:37.084 ************************************ 00:07:37.084 21:49:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:37.084 21:49:21 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:37.084 00:07:37.084 real 0m32.024s 00:07:37.084 user 0m48.443s 00:07:37.084 sys 0m5.799s 00:07:37.084 ************************************ 00:07:37.084 END TEST blockdev_nvme 00:07:37.084 21:49:21 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.084 21:49:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:37.084 ************************************ 00:07:37.084 21:49:21 -- spdk/autotest.sh@209 -- # uname -s 00:07:37.084 21:49:21 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:37.084 21:49:21 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:37.084 21:49:21 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:37.084 21:49:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.084 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:07:37.084 ************************************ 00:07:37.084 START TEST blockdev_nvme_gpt 00:07:37.085 ************************************ 00:07:37.085 21:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:37.347 * Looking for test storage... 
00:07:37.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:37.347 21:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:37.347 21:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:37.347 21:49:21 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.347 21:49:22 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:37.347 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.347 --rc genhtml_branch_coverage=1 00:07:37.347 --rc genhtml_function_coverage=1 00:07:37.347 --rc genhtml_legend=1 00:07:37.347 --rc geninfo_all_blocks=1 00:07:37.347 --rc geninfo_unexecuted_blocks=1 00:07:37.347 00:07:37.347 ' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.347 --rc 
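The flurry of scripts/common.sh lines above is a pure-bash version comparison: "lt 1.15 2" asks whether the installed lcov predates 2.0. Condensed into a sketch (the real cmp_versions also handles the '>', '=', '>=' and '<=' operators traced in its case statement):

  lt() { cmp_versions "$1" "<" "$2"; }
  cmp_versions() {
    local IFS=.-:              # split version strings on dots, dashes and colons, as traced above
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
      ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # here 1 < 2 settles "1.15 < 2" immediately
      ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1                   # equal is not less-than
  }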
genhtml_branch_coverage=1 00:07:37.347 --rc genhtml_function_coverage=1 00:07:37.347 --rc genhtml_legend=1 00:07:37.347 --rc geninfo_all_blocks=1 00:07:37.347 --rc geninfo_unexecuted_blocks=1 00:07:37.347 00:07:37.347 ' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.347 --rc genhtml_branch_coverage=1 00:07:37.347 --rc genhtml_function_coverage=1 00:07:37.347 --rc genhtml_legend=1 00:07:37.347 --rc geninfo_all_blocks=1 00:07:37.347 --rc geninfo_unexecuted_blocks=1 00:07:37.347 00:07:37.347 ' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:37.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.347 --rc genhtml_branch_coverage=1 00:07:37.347 --rc genhtml_function_coverage=1 00:07:37.347 --rc genhtml_legend=1 00:07:37.347 --rc geninfo_all_blocks=1 00:07:37.347 --rc geninfo_unexecuted_blocks=1 00:07:37.347 00:07:37.347 ' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:37.347 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74043 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 74043 
00:07:37.348 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 74043 ']' 00:07:37.348 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.348 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:37.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.348 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.348 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.348 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.348 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:37.348 [2024-09-30 21:49:22.133850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:37.348 [2024-09-30 21:49:22.134469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74043 ] 00:07:37.609 [2024-09-30 21:49:22.268418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:37.609 [2024-09-30 21:49:22.281875] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.609 [2024-09-30 21:49:22.341709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.183 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:38.183 21:49:22 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:38.183 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:38.183 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:38.183 21:49:22 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.757 Waiting for block devices as requested 00:07:38.757 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.018 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.018 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:39.018 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.313 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:44.313 21:49:28 
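All six is_block_zoned probes above answer "none", so the zoned-device map stays empty and every namespace is eligible for the GPT test. The check itself is tiny; a sketch reconstructed from the trace:

  # A block device is zoned when /sys/block/<dev>/queue/zoned reads something
  # other than "none" (e.g. "host-managed").
  is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
  }
  for nvme in /sys/block/nvme*; do
    is_block_zoned "${nvme##*/}" && echo "zoned: $nvme"
  done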
blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:44.313 BYT; 00:07:44.313 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:44.313 BYT; 00:07:44.313 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:44.313 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:44.314 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:44.314 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:44.314 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w 
SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:44.314 21:49:28 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:44.314 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:44.314 21:49:28 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:45.258 The operation has completed successfully. 00:07:45.258 21:49:30 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:46.640 The operation has completed successfully. 00:07:46.640 21:49:31 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:46.900 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.470 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.470 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.470 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.470 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.731 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:47.731 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.731 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.731 [] 00:07:47.731 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.731 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:47.731 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:47.731 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:47.731 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.731 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:47.731 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.731 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.993 21:49:32 blockdev_nvme_gpt -- 
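The GUID plumbing that just ran is compact: both SPDK partition-type GUIDs are scraped out of module/bdev/gpt/gpt.h, normalized from their 0x-prefixed form, and stamped onto the two freshly made partitions with sgdisk, which is what lets the gpt bdev module claim Nvme1n1p1/p2 later in the run. A sketch of the extraction for the primary GUID (the string surgery is reconstructed from the traced intermediate values):

  GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
  # grep -w skips the _OLD variant; IFS='()' isolates what sits inside the macro's parentheses.
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
  spdk_guid=${spdk_guid//0x/}   # 0x6527994e-0x2c5a-... -> 6527994e-2c5a-4eec-9613-8f5944074e8b
  sgdisk -t 1:"$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1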
bdev/blockdev.sh@739 -- # cat 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.993 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:47.993 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:47.994 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0a5a0d0f-3a3d-440e-bddf-4a0a2912c950"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0a5a0d0f-3a3d-440e-bddf-4a0a2912c950",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' 
'}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d4568f71-a935-47e3-a62c-5906af7aa5d1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d4568f71-a935-47e3-a62c-5906af7aa5d1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' 
"security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8ff585ad-f318-4cd5-8b2c-772d671b5e6e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ff585ad-f318-4cd5-8b2c-772d671b5e6e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "894fb2b8-5cf8-44e3-a166-3f9a3be84a5d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "894fb2b8-5cf8-44e3-a166-3f9a3be84a5d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "83b2d9d1-3226-4f6c-bdc6-496c905f5abb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"83b2d9d1-3226-4f6c-bdc6-496c905f5abb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:48.255 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:48.255 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:48.255 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:48.255 21:49:32 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 74043 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 74043 ']' 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 74043 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74043 00:07:48.255 killing process with pid 74043 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74043' 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 74043 00:07:48.255 21:49:32 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 74043 00:07:48.517 21:49:33 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:48.517 21:49:33 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:48.517 21:49:33 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:48.517 21:49:33 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:48.517 21:49:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.517 ************************************ 00:07:48.517 START TEST bdev_hello_world 00:07:48.517 ************************************ 00:07:48.517 21:49:33 blockdev_nvme_gpt.bdev_hello_world -- 
common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:48.778 [2024-09-30 21:49:33.386293] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:48.778 [2024-09-30 21:49:33.386450] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74663 ] 00:07:48.778 [2024-09-30 21:49:33.519609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:48.778 [2024-09-30 21:49:33.539524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.778 [2024-09-30 21:49:33.581390] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.350 [2024-09-30 21:49:33.959823] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:49.350 [2024-09-30 21:49:33.959880] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:49.350 [2024-09-30 21:49:33.959900] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:49.350 [2024-09-30 21:49:33.961982] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:49.350 [2024-09-30 21:49:33.962506] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:49.350 [2024-09-30 21:49:33.962534] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:49.350 [2024-09-30 21:49:33.963208] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:49.350 00:07:49.350 [2024-09-30 21:49:33.963237] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:49.350 00:07:49.350 real 0m0.816s 00:07:49.350 user 0m0.518s 00:07:49.350 sys 0m0.193s 00:07:49.350 21:49:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.350 ************************************ 00:07:49.350 END TEST bdev_hello_world 00:07:49.350 ************************************ 00:07:49.350 21:49:34 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:49.612 21:49:34 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:49.612 21:49:34 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:49.612 21:49:34 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.612 21:49:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.612 ************************************ 00:07:49.612 START TEST bdev_bounds 00:07:49.612 ************************************ 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:49.612 Process bdevio pid: 74693 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74693 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74693' 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74693 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 74693 ']' 
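The wall of JSON above is the enumeration step of blockdev.sh doing its job: bdev_get_bdevs returns every bdev the app knows about, jq keeps only the unclaimed ones, and a second jq pass reduces them to names for bdev_list. Stripped of the xtrace noise, the pattern is roughly the following sketch (assuming scripts/rpc.py can reach the app's RPC socket; -c is added here only so mapfile sees one JSON object per line):

  mapfile -t bdevs < <(scripts/rpc.py bdev_get_bdevs | jq -c '.[] | select(.claimed == false)')
  mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
  bdev_list=("${bdevs_name[@]}")     # in this run: Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1
  hello_world_bdev=${bdev_list[0]}   # the first unclaimed bdev, Nvme0n1, feeds the hello_world test above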
00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:49.612 21:49:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:49.612 [2024-09-30 21:49:34.252466] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:07:49.612 [2024-09-30 21:49:34.252582] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74693 ] 00:07:49.612 [2024-09-30 21:49:34.388999] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:49.612 [2024-09-30 21:49:34.407268] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.873 [2024-09-30 21:49:34.444721] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.873 [2024-09-30 21:49:34.445206] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.873 [2024-09-30 21:49:34.445372] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.445 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.445 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:50.445 21:49:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:50.445 I/O targets: 00:07:50.445 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:50.445 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:50.445 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:50.445 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:50.445 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:50.445 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:50.445 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:50.445 00:07:50.445 00:07:50.445 CUnit - A unit testing framework for C - Version 2.1-3 00:07:50.445 http://cunit.sourceforge.net/ 00:07:50.445 00:07:50.445 00:07:50.445 Suite: bdevio tests on: Nvme3n1 00:07:50.445 Test: blockdev write read block ...passed 00:07:50.445 Test: blockdev write zeroes read block ...passed 00:07:50.445 Test: blockdev write zeroes read no split ...passed 00:07:50.445 Test: blockdev write zeroes read split ...passed 00:07:50.445 Test: blockdev write zeroes read split partial ...passed 00:07:50.446 Test: blockdev reset ...[2024-09-30 21:49:35.240456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:50.446 passed 00:07:50.446 Test: blockdev write read 8 blocks ...[2024-09-30 
21:49:35.245140] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:50.446 passed 00:07:50.446 Test: blockdev write read size > 128k ...passed 00:07:50.446 Test: blockdev write read invalid size ...passed 00:07:50.446 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.446 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.446 Test: blockdev write read max offset ...passed 00:07:50.446 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.446 Test: blockdev writev readv 8 blocks ...passed 00:07:50.446 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.446 Test: blockdev writev readv block ...passed 00:07:50.446 Test: blockdev writev readv size > 128k ...passed 00:07:50.707 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.707 Test: blockdev comparev and writev ...[2024-09-30 21:49:35.261204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cba0e000 len:0x1000 00:07:50.707 [2024-09-30 21:49:35.261255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.707 passed 00:07:50.707 Test: blockdev nvme passthru rw ...passed 00:07:50.707 Test: blockdev nvme passthru vendor specific ...[2024-09-30 21:49:35.263717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.707 [2024-09-30 21:49:35.263754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.707 passed 00:07:50.707 Test: blockdev nvme admin passthru ...passed 00:07:50.707 Test: blockdev copy ...passed 00:07:50.707 Suite: bdevio tests on: Nvme2n3 00:07:50.707 Test: blockdev write read block ...passed 00:07:50.707 Test: blockdev write zeroes read block ...passed 00:07:50.707 Test: blockdev write zeroes read no split ...passed 00:07:50.707 Test: blockdev write zeroes read split ...passed 00:07:50.707 Test: blockdev write zeroes read split partial ...passed 00:07:50.707 Test: blockdev reset ...[2024-09-30 21:49:35.294279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:50.707 [2024-09-30 21:49:35.298054] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
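bdevio itself is started as an SPDK app with -w, so it comes up idle; the suites printed here are kicked off over the RPC socket by tests.py issuing a perform_tests request. A minimal sketch of that handshake, using the same paths as this run (the waitforlisten retry loop is elided):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # once the app listens on /var/tmp/spdk.sock, trigger all suites and wait for the summary
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid" && wait "$bdevio_pid"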
00:07:50.707 passed 00:07:50.707 Test: blockdev write read 8 blocks ...passed 00:07:50.707 Test: blockdev write read size > 128k ...passed 00:07:50.707 Test: blockdev write read invalid size ...passed 00:07:50.707 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.707 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.707 Test: blockdev write read max offset ...passed 00:07:50.707 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.707 Test: blockdev writev readv 8 blocks ...passed 00:07:50.707 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.707 Test: blockdev writev readv block ...passed 00:07:50.707 Test: blockdev writev readv size > 128k ...passed 00:07:50.707 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.707 Test: blockdev comparev and writev ...[2024-09-30 21:49:35.318436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cba0a000 len:0x1000 00:07:50.707 [2024-09-30 21:49:35.318564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.707 passed 00:07:50.707 Test: blockdev nvme passthru rw ...passed 00:07:50.707 Test: blockdev nvme passthru vendor specific ...[2024-09-30 21:49:35.322422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.707 [2024-09-30 21:49:35.322546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.707 passed 00:07:50.707 Test: blockdev nvme admin passthru ...passed 00:07:50.707 Test: blockdev copy ...passed 00:07:50.707 Suite: bdevio tests on: Nvme2n2 00:07:50.707 Test: blockdev write read block ...passed 00:07:50.707 Test: blockdev write zeroes read block ...passed 00:07:50.707 Test: blockdev write zeroes read no split ...passed 00:07:50.707 Test: blockdev write zeroes read split ...passed 00:07:50.707 Test: blockdev write zeroes read split partial ...passed 00:07:50.707 Test: blockdev reset ...[2024-09-30 21:49:35.346704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:50.707 [2024-09-30 21:49:35.350601] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:50.707 passed 00:07:50.707 Test: blockdev write read 8 blocks ...passed 00:07:50.707 Test: blockdev write read size > 128k ...passed 00:07:50.707 Test: blockdev write read invalid size ...passed 00:07:50.707 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.707 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.707 Test: blockdev write read max offset ...passed 00:07:50.707 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.707 Test: blockdev writev readv 8 blocks ...passed 00:07:50.707 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.707 Test: blockdev writev readv block ...passed 00:07:50.707 Test: blockdev writev readv size > 128k ...passed 00:07:50.707 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.707 Test: blockdev comparev and writev ...[2024-09-30 21:49:35.369170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ce005000 len:0x1000 00:07:50.707 [2024-09-30 21:49:35.369231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.707 passed 00:07:50.707 Test: blockdev nvme passthru rw ...passed 00:07:50.707 Test: blockdev nvme passthru vendor specific ...[2024-09-30 21:49:35.371202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.707 [2024-09-30 21:49:35.371231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.707 passed 00:07:50.707 Test: blockdev nvme admin passthru ...passed 00:07:50.707 Test: blockdev copy ...passed 00:07:50.707 Suite: bdevio tests on: Nvme2n1 00:07:50.707 Test: blockdev write read block ...passed 00:07:50.707 Test: blockdev write zeroes read block ...passed 00:07:50.707 Test: blockdev write zeroes read no split ...passed 00:07:50.707 Test: blockdev write zeroes read split ...passed 00:07:50.707 Test: blockdev write zeroes read split partial ...passed 00:07:50.707 Test: blockdev reset ...[2024-09-30 21:49:35.395900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:50.707 passed 00:07:50.707 Test: blockdev write read 8 blocks ...[2024-09-30 21:49:35.400868] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:50.707 passed 00:07:50.707 Test: blockdev write read size > 128k ...passed 00:07:50.707 Test: blockdev write read invalid size ...passed 00:07:50.707 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.707 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.707 Test: blockdev write read max offset ...passed 00:07:50.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.708 Test: blockdev writev readv 8 blocks ...passed 00:07:50.708 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.708 Test: blockdev writev readv block ...passed 00:07:50.708 Test: blockdev writev readv size > 128k ...passed 00:07:50.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.708 Test: blockdev comparev and writev ...[2024-09-30 21:49:35.418196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf202000 len:0x1000 00:07:50.708 [2024-09-30 21:49:35.418254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.708 passed 00:07:50.708 Test: blockdev nvme passthru rw ...passed 00:07:50.708 Test: blockdev nvme passthru vendor specific ...passed 00:07:50.708 Test: blockdev nvme admin passthru ...[2024-09-30 21:49:35.421314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:50.708 [2024-09-30 21:49:35.421353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:50.708 passed 00:07:50.708 Test: blockdev copy ...passed 00:07:50.708 Suite: bdevio tests on: Nvme1n1p2 00:07:50.708 Test: blockdev write read block ...passed 00:07:50.708 Test: blockdev write zeroes read block ...passed 00:07:50.708 Test: blockdev write zeroes read no split ...passed 00:07:50.708 Test: blockdev write zeroes read split ...passed 00:07:50.708 Test: blockdev write zeroes read split partial ...passed 00:07:50.708 Test: blockdev reset ...[2024-09-30 21:49:35.448198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:50.708 passed 00:07:50.708 Test: blockdev write read 8 blocks ...[2024-09-30 21:49:35.451496] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:50.708 passed 00:07:50.708 Test: blockdev write read size > 128k ...passed 00:07:50.708 Test: blockdev write read invalid size ...passed 00:07:50.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.708 Test: blockdev write read max offset ...passed 00:07:50.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.708 Test: blockdev writev readv 8 blocks ...passed 00:07:50.708 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.708 Test: blockdev writev readv block ...passed 00:07:50.708 Test: blockdev writev readv size > 128k ...passed 00:07:50.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.708 Test: blockdev comparev and writev ...[2024-09-30 21:49:35.469754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d1a3b000 len:0x1000 00:07:50.708 [2024-09-30 21:49:35.469804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.708 passed 00:07:50.708 Test: blockdev nvme passthru rw ...passed 00:07:50.708 Test: blockdev nvme passthru vendor specific ...passed 00:07:50.708 Test: blockdev nvme admin passthru ...passed 00:07:50.708 Test: blockdev copy ...passed 00:07:50.708 Suite: bdevio tests on: Nvme1n1p1 00:07:50.708 Test: blockdev write read block ...passed 00:07:50.708 Test: blockdev write zeroes read block ...passed 00:07:50.708 Test: blockdev write zeroes read no split ...passed 00:07:50.708 Test: blockdev write zeroes read split ...passed 00:07:50.708 Test: blockdev write zeroes read split partial ...passed 00:07:50.708 Test: blockdev reset ...[2024-09-30 21:49:35.491540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:50.708 [2024-09-30 21:49:35.494833] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
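Worth noticing in the COMPARE traces: the Nvme1n1p2 compare above went to lba:655360 of namespace 1, and the Nvme1n1p1 compare below goes to lba:256 — block 0 of each GPT partition bdev is simply shifted by that partition's offset_blocks from the bdev dump earlier (655360 and 256 respectively). While an app exposing these bdevs is up, the offset can be read back directly, e.g.:

  scripts/rpc.py bdev_get_bdevs -b Nvme1n1p2 | jq -r '.[0].driver_specific.gpt.offset_blocks'   # 655360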
00:07:50.708 passed 00:07:50.708 Test: blockdev write read 8 blocks ...passed 00:07:50.708 Test: blockdev write read size > 128k ...passed 00:07:50.708 Test: blockdev write read invalid size ...passed 00:07:50.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.708 Test: blockdev write read max offset ...passed 00:07:50.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.708 Test: blockdev writev readv 8 blocks ...passed 00:07:50.708 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.708 Test: blockdev writev readv block ...passed 00:07:50.708 Test: blockdev writev readv size > 128k ...passed 00:07:50.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.708 Test: blockdev comparev and writev ...[2024-09-30 21:49:35.513823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2d1a37000 len:0x1000 00:07:50.708 [2024-09-30 21:49:35.513869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:50.708 passed 00:07:50.708 Test: blockdev nvme passthru rw ...passed 00:07:50.708 Test: blockdev nvme passthru vendor specific ...passed 00:07:50.708 Test: blockdev nvme admin passthru ...passed 00:07:50.708 Test: blockdev copy ...passed 00:07:50.708 Suite: bdevio tests on: Nvme0n1 00:07:50.708 Test: blockdev write read block ...passed 00:07:50.969 Test: blockdev write zeroes read block ...passed 00:07:50.969 Test: blockdev write zeroes read no split ...passed 00:07:50.969 Test: blockdev write zeroes read split ...passed 00:07:50.969 Test: blockdev write zeroes read split partial ...passed 00:07:50.969 Test: blockdev reset ...[2024-09-30 21:49:35.537426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:50.969 passed 00:07:50.969 Test: blockdev write read 8 blocks ...[2024-09-30 21:49:35.540156] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:50.969 passed 00:07:50.969 Test: blockdev write read size > 128k ...passed 00:07:50.969 Test: blockdev write read invalid size ...passed 00:07:50.969 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:50.969 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:50.969 Test: blockdev write read max offset ...passed 00:07:50.969 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:50.969 Test: blockdev writev readv 8 blocks ...passed 00:07:50.969 Test: blockdev writev readv 30 x 1block ...passed 00:07:50.969 Test: blockdev writev readv block ...passed 00:07:50.969 Test: blockdev writev readv size > 128k ...passed 00:07:50.969 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:50.969 Test: blockdev comparev and writev ...passed 00:07:50.969 Test: blockdev nvme passthru rw ...[2024-09-30 21:49:35.554737] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:50.969 separate metadata which is not supported yet. 
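The "skipping comparev_and_writev" message below is expected rather than a failure: Nvme0n1 was created with 64 bytes of separate, non-interleaved metadata per block ("md_size": 64, "md_interleave": false in the dump above), and bdevio's comparev_and_writev path does not handle that layout yet. The affected bdevs can be listed straight from the same RPC output, e.g.:

  scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'   # Nvme0n1 in this run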
00:07:50.969 passed 00:07:50.969 Test: blockdev nvme passthru vendor specific ...[2024-09-30 21:49:35.556795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:50.969 passed 00:07:50.969 Test: blockdev nvme admin passthru ...[2024-09-30 21:49:35.556837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:50.969 passed 00:07:50.969 Test: blockdev copy ...passed 00:07:50.969 00:07:50.969 Run Summary: Type Total Ran Passed Failed Inactive 00:07:50.969 suites 7 7 n/a 0 0 00:07:50.969 tests 161 161 161 0 0 00:07:50.969 asserts 1025 1025 1025 0 n/a 00:07:50.969 00:07:50.969 Elapsed time = 0.756 seconds 00:07:50.969 0 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74693 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 74693 ']' 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 74693 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74693 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.969 killing process with pid 74693 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74693' 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 74693 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 74693 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:50.969 00:07:50.969 real 0m1.585s 00:07:50.969 user 0m3.895s 00:07:50.969 sys 0m0.298s 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.969 ************************************ 00:07:50.969 END TEST bdev_bounds 00:07:50.969 ************************************ 00:07:50.969 21:49:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:51.230 21:49:35 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:51.230 21:49:35 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:51.230 21:49:35 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.230 21:49:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.230 ************************************ 00:07:51.230 START TEST bdev_nbd 00:07:51.230 ************************************ 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74737 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74737 /var/tmp/spdk-nbd.sock 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 74737 ']' 00:07:51.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:51.230 21:49:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:51.231 [2024-09-30 21:49:35.916424] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
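The nbd stage that follows attaches each bdev to a kernel /dev/nbdX node through the bdev_svc app listening on /var/tmp/spdk-nbd.sock, waits for the node to appear in /proc/partitions, and sanity-reads one 4096-byte block through it with O_DIRECT. Collapsed from the xtrace below, the per-device flow is roughly this sketch (the retry loops in waitfornbd/waitfornbd_exit are elided):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  for b in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
      nbd=$("$rpc" -s "$sock" nbd_start_disk "$b")            # prints the assigned /dev/nbdX
      grep -q -w "$(basename "$nbd")" /proc/partitions        # waitfornbd, minus its retry loop
      dd if="$nbd" of=/dev/null bs=4096 count=1 iflag=direct  # one direct 4 KiB read
  done
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[].nbd_device' |
      while read -r d; do "$rpc" -s "$sock" nbd_stop_disk "$d"; done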
00:07:51.231 [2024-09-30 21:49:35.916549] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.490 [2024-09-30 21:49:36.049860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:51.490 [2024-09-30 21:49:36.062604] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.490 [2024-09-30 21:49:36.102036] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.061 21:49:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.322 1+0 records in 00:07:52.322 1+0 records out 00:07:52.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734198 s, 5.6 MB/s 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.322 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.584 1+0 records in 00:07:52.584 1+0 records out 00:07:52.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106306 s, 3.9 MB/s 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.584 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.846 1+0 records in 00:07:52.846 1+0 records out 00:07:52.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010146 s, 4.0 MB/s 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.846 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.108 1+0 records in 00:07:53.108 1+0 records out 00:07:53.108 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119154 s, 3.4 MB/s 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.108 21:49:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.369 1+0 records in 00:07:53.369 1+0 records out 00:07:53.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850077 s, 4.8 MB/s 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.369 21:49:38 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.370 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.631 1+0 records in 00:07:53.631 1+0 records out 00:07:53.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113168 s, 3.6 MB/s 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.631 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.892 1+0 records in 00:07:53.892 1+0 records out 00:07:53.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000949722 s, 4.3 MB/s 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.892 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.893 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.154 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:54.154 { 00:07:54.154 "nbd_device": "/dev/nbd0", 00:07:54.154 "bdev_name": "Nvme0n1" 00:07:54.154 }, 00:07:54.154 { 00:07:54.154 "nbd_device": "/dev/nbd1", 00:07:54.154 "bdev_name": "Nvme1n1p1" 00:07:54.154 }, 00:07:54.154 { 00:07:54.154 "nbd_device": "/dev/nbd2", 00:07:54.154 "bdev_name": "Nvme1n1p2" 00:07:54.154 }, 00:07:54.154 { 00:07:54.154 "nbd_device": "/dev/nbd3", 00:07:54.154 "bdev_name": "Nvme2n1" 00:07:54.154 }, 00:07:54.154 { 00:07:54.155 "nbd_device": "/dev/nbd4", 00:07:54.155 "bdev_name": "Nvme2n2" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd5", 00:07:54.155 "bdev_name": "Nvme2n3" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd6", 00:07:54.155 "bdev_name": "Nvme3n1" 00:07:54.155 } 00:07:54.155 ]' 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd0", 00:07:54.155 "bdev_name": "Nvme0n1" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd1", 00:07:54.155 "bdev_name": "Nvme1n1p1" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd2", 00:07:54.155 "bdev_name": "Nvme1n1p2" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd3", 00:07:54.155 "bdev_name": "Nvme2n1" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd4", 00:07:54.155 "bdev_name": "Nvme2n2" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd5", 00:07:54.155 "bdev_name": "Nvme2n3" 00:07:54.155 }, 00:07:54.155 { 00:07:54.155 "nbd_device": "/dev/nbd6", 00:07:54.155 "bdev_name": "Nvme3n1" 00:07:54.155 } 00:07:54.155 ]' 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 
/dev/nbd6' 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.155 21:49:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.416 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.678 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.938 21:49:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.938 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.198 21:49:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.459 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd6 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.721 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:55.981 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # 
bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:55.982 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:56.295 /dev/nbd0 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.295 1+0 records in 00:07:56.295 1+0 records out 00:07:56.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120149 s, 3.4 MB/s 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.295 21:49:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:56.567 /dev/nbd1 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:56.567 21:49:41 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.567 1+0 records in 00:07:56.567 1+0 records out 00:07:56.567 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101145 s, 4.0 MB/s 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.567 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:56.828 /dev/nbd10 00:07:56.828 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:56.828 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:56.828 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.829 1+0 records in 00:07:56.829 1+0 records out 00:07:56.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118975 s, 3.4 MB/s 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:56.829 
21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.829 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:57.090 /dev/nbd11 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.090 1+0 records in 00:07:57.090 1+0 records out 00:07:57.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103299 s, 4.0 MB/s 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.090 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:57.349 /dev/nbd12 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.349 21:49:41 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.349 1+0 records in 00:07:57.349 1+0 records out 00:07:57.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102893 s, 4.0 MB/s 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.349 21:49:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:57.607 /dev/nbd13 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:57.607 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.608 1+0 records in 00:07:57.608 1+0 records out 00:07:57.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00144034 s, 2.8 MB/s 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.608 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:57.866 /dev/nbd14 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.866 1+0 records in 00:07:57.866 1+0 records out 00:07:57.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00133614 s, 3.1 MB/s 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd0", 00:07:57.866 "bdev_name": "Nvme0n1" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd1", 00:07:57.866 "bdev_name": "Nvme1n1p1" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd10", 00:07:57.866 "bdev_name": "Nvme1n1p2" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd11", 00:07:57.866 "bdev_name": "Nvme2n1" 00:07:57.866 
}, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd12", 00:07:57.866 "bdev_name": "Nvme2n2" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd13", 00:07:57.866 "bdev_name": "Nvme2n3" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd14", 00:07:57.866 "bdev_name": "Nvme3n1" 00:07:57.866 } 00:07:57.866 ]' 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd0", 00:07:57.866 "bdev_name": "Nvme0n1" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd1", 00:07:57.866 "bdev_name": "Nvme1n1p1" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd10", 00:07:57.866 "bdev_name": "Nvme1n1p2" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd11", 00:07:57.866 "bdev_name": "Nvme2n1" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd12", 00:07:57.866 "bdev_name": "Nvme2n2" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd13", 00:07:57.866 "bdev_name": "Nvme2n3" 00:07:57.866 }, 00:07:57.866 { 00:07:57.866 "nbd_device": "/dev/nbd14", 00:07:57.866 "bdev_name": "Nvme3n1" 00:07:57.866 } 00:07:57.866 ]' 00:07:57.866 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.124 /dev/nbd1 00:07:58.124 /dev/nbd10 00:07:58.124 /dev/nbd11 00:07:58.124 /dev/nbd12 00:07:58.124 /dev/nbd13 00:07:58.124 /dev/nbd14' 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.124 /dev/nbd1 00:07:58.124 /dev/nbd10 00:07:58.124 /dev/nbd11 00:07:58.124 /dev/nbd12 00:07:58.124 /dev/nbd13 00:07:58.124 /dev/nbd14' 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:58.124 256+0 records in 00:07:58.124 256+0 records out 00:07:58.124 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01214 s, 86.4 MB/s 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.124 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.381 256+0 records in 00:07:58.381 256+0 records out 00:07:58.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252162 s, 4.2 MB/s 00:07:58.381 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.381 21:49:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:58.639 256+0 records in 00:07:58.639 256+0 records out 00:07:58.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.265703 s, 3.9 MB/s 00:07:58.639 21:49:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.639 21:49:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:58.898 256+0 records in 00:07:58.898 256+0 records out 00:07:58.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.269326 s, 3.9 MB/s 00:07:58.898 21:49:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.898 21:49:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:59.156 256+0 records in 00:07:59.156 256+0 records out 00:07:59.156 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244265 s, 4.3 MB/s 00:07:59.156 21:49:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.156 21:49:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:59.414 256+0 records in 00:07:59.414 256+0 records out 00:07:59.414 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.263971 s, 4.0 MB/s 00:07:59.414 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.414 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:59.671 256+0 records in 00:07:59.671 256+0 records out 00:07:59.671 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.227186 s, 4.6 MB/s 00:07:59.671 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.671 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:59.930 256+0 records in 00:07:59.930 256+0 records out 00:07:59.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238715 s, 4.4 MB/s 00:07:59.930 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:59.930 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:59.930 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.930 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:59.930 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.930 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = 
write ']' 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.931 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.191 21:49:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.450 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.709 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.020 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.300 21:49:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.300 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.558 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:01.558 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 
00:08:01.558 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.558 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:01.558 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.558 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:01.559 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:01.817 malloc_lvol_verify 00:08:01.817 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:02.078 99d9e0f7-7040-499f-b27d-ea3df19875d7 00:08:02.078 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:02.336 c2b42366-6d9e-4945-b086-dbab15c548a8 00:08:02.336 21:49:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:02.594 /dev/nbd0 00:08:02.594 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:02.594 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:02.594 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:02.594 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:02.595 mke2fs 1.47.0 (5-Feb-2023) 00:08:02.595 Discarding device blocks: 0/4096 done 00:08:02.595 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:02.595 00:08:02.595 Allocating group tables: 0/1 done 00:08:02.595 Writing inode tables: 0/1 done 00:08:02.595 Creating journal (1024 blocks): done 00:08:02.595 Writing superblocks and filesystem accounting information: 0/1 done 00:08:02.595 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.595 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74737 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 74737 ']' 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 74737 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74737 00:08:02.855 killing process with pid 74737 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74737' 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 74737 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 74737 00:08:02.855 ************************************ 00:08:02.855 END TEST bdev_nbd 00:08:02.855 ************************************ 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:02.855 00:08:02.855 real 0m11.780s 00:08:02.855 user 0m16.134s 00:08:02.855 sys 0m4.194s 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.855 21:49:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:03.117 21:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:03.117 21:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:03.117 skipping fio tests on NVMe due to multi-ns failures. 00:08:03.117 21:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:03.117 21:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
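The bdev_nbd test that ends above repeats one round trip for every device: export a bdev over the kernel nbd driver with nbd_start_disk, poll /proc/partitions until the device appears and answers a direct read (the waitfornbd helper traced above), then detach it with nbd_stop_disk and poll until the partition entry disappears again (waitfornbd_exit). A minimal bash sketch of that round trip, assuming the same checkout path and RPC socket as the trace; the retry bound mirrors the (( i <= 20 )) loops visible above, while the sleep between polls is an assumption rather than a detail shown in this log, and the real helpers in autotest_common.sh differ in detail (they verify the read through a temporary file):

SPDK_DIR=/home/vagrant/spdk_repo/spdk      # checkout path as seen in the trace
SOCK=/var/tmp/spdk-nbd.sock                # RPC socket the nbd target listens on

# export bdev Nvme0n1 as /dev/nbd0
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0

# wait until the kernel lists the device, then prove it with one direct 4 KiB read
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1                              # poll interval is an assumption
done
dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct

# detach and wait for the partition entry to go away again
"$SPDK_DIR/scripts/rpc.py" -s "$SOCK" nbd_stop_disk /dev/nbd0
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions || break
    sleep 0.1
done

The same pattern, with /proc/partitions as the source of truth and a bounded retry count, is what produces the long grep/break/return sequences in the trace above and in the bdevperf runs that follow.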
00:08:03.117 21:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:03.117 21:49:47 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:03.117 21:49:47 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:03.117 21:49:47 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.117 21:49:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.117 ************************************ 00:08:03.117 START TEST bdev_verify 00:08:03.117 ************************************ 00:08:03.117 21:49:47 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:03.117 [2024-09-30 21:49:47.757483] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:03.117 [2024-09-30 21:49:47.758015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75162 ] 00:08:03.117 [2024-09-30 21:49:47.887898] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.117 [2024-09-30 21:49:47.905625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.378 [2024-09-30 21:49:47.940834] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.378 [2024-09-30 21:49:47.940987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.640 Running I/O for 5 seconds... 
00:08:08.803 18050.00 IOPS, 70.51 MiB/s
18080.50 IOPS, 70.63 MiB/s
18090.67 IOPS, 70.67 MiB/s
18094.25 IOPS, 70.68 MiB/s
18089.80 IOPS, 70.66 MiB/s
00:08:08.803 Latency(us)
00:08:08.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:08.803 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x0 length 0xbd0bd
00:08:08.803 Nvme0n1 : 5.08 1271.92 4.97 0.00 0.00 100402.77 18148.43 113730.17
00:08:08.803 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:08:08.803 Nvme0n1 : 5.08 1285.55 5.02 0.00 0.00 99335.50 22181.42 118569.75
00:08:08.803 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x0 length 0x4ff80
00:08:08.803 Nvme1n1p1 : 5.08 1270.78 4.96 0.00 0.00 100138.98 13409.67 116956.55
00:08:08.803 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x4ff80 length 0x4ff80
00:08:08.803 Nvme1n1p1 : 5.08 1284.70 5.02 0.00 0.00 99226.95 23290.49 121796.14
00:08:08.803 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x0 length 0x4ff7f
00:08:08.803 Nvme1n1p2 : 5.09 1270.30 4.96 0.00 0.00 99889.91 9376.69 119376.34
00:08:08.803 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:08:08.803 Nvme1n1p2 : 5.08 1284.20 5.02 0.00 0.00 99102.40 24097.08 120989.54
00:08:08.803 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x0 length 0x80000
00:08:08.803 Nvme2n1 : 5.09 1269.59 4.96 0.00 0.00 99607.18 11594.83 121796.14
00:08:08.803 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x80000 length 0x80000
00:08:08.803 Nvme2n1 : 5.09 1283.57 5.01 0.00 0.00 98970.33 24702.03 120182.94
00:08:08.803 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x0 length 0x80000
00:08:08.803 Nvme2n2 : 5.10 1269.34 4.96 0.00 0.00 99373.91 1310.72 126635.72
00:08:08.803 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x80000 length 0x80000
00:08:08.803 Nvme2n2 : 5.09 1283.12 5.01 0.00 0.00 98825.34 25306.98 118569.75
00:08:08.803 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x0 length 0x80000
00:08:08.803 Nvme2n3 : 5.10 1269.10 4.96 0.00 0.00 99284.12 1676.21 131475.30
00:08:08.803 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x80000 length 0x80000
00:08:08.803 Nvme2n3 : 5.09 1282.77 5.01 0.00 0.00 98727.58 24298.73 116956.55
00:08:08.803 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x0 length 0x20000
00:08:08.803 Nvme3n1 : 5.10 1269.24 4.96 0.00 0.00 99154.80 2104.71 137121.48
00:08:08.803 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:08.803 Verification LBA range: start 0x20000 length 0x20000
00:08:08.803 Nvme3n1 : 5.09 1282.05 5.01 0.00 0.00 98601.01 19156.68 119376.34
=================================================================================================================== 00:08:08.803 Total : 17876.23 69.83 0.00 0.00 99329.66 1310.72 137121.48 00:08:10.201 00:08:10.201 real 0m6.868s 00:08:10.201 user 0m12.354s 00:08:10.201 sys 0m0.228s 00:08:10.201 21:49:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.201 ************************************ 00:08:10.201 END TEST bdev_verify 00:08:10.201 ************************************ 00:08:10.201 21:49:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 21:49:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.201 21:49:54 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:08:10.201 21:49:54 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.201 21:49:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.201 ************************************ 00:08:10.201 START TEST bdev_verify_big_io 00:08:10.201 ************************************ 00:08:10.201 21:49:54 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:10.201 [2024-09-30 21:49:54.704436] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:10.201 [2024-09-30 21:49:54.704574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75260 ] 00:08:10.201 [2024-09-30 21:49:54.834499] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.201 [2024-09-30 21:49:54.854213] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:10.201 [2024-09-30 21:49:54.889723] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.201 [2024-09-30 21:49:54.889818] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.784 Running I/O for 5 seconds... 
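The big-I/O pass that starts here reuses the same harness and JSON config; only the transfer size changes, from 4096 to 65536 bytes (a sketch, values copied from the command in the log):

  # Same 5-second verify workload, but with 64 KiB I/Os
  ./build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3

With per-I/O size up 16x, the Total rows of the two tables show IOPS falling roughly tenfold (17876.23 to 1656.22) while aggregate throughput climbs from about 70 to about 104 MiB/s, the expected trade-off for larger transfers.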
00:08:16.933 866.00 IOPS, 54.12 MiB/s 2523.50 IOPS, 157.72 MiB/s 00:08:16.933 Latency(us) 00:08:16.933 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.933 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x0 length 0xbd0b 00:08:16.933 Nvme0n1 : 5.87 108.73 6.80 0.00 0.00 1098068.01 25206.15 1187310.67 00:08:16.933 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:16.933 Nvme0n1 : 5.97 98.62 6.16 0.00 0.00 1231946.36 16535.24 1755154.90 00:08:16.933 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x0 length 0x4ff8 00:08:16.933 Nvme1n1p1 : 5.87 113.28 7.08 0.00 0.00 1041735.39 56865.08 1000180.18 00:08:16.933 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:16.933 Nvme1n1p1 : 5.97 104.77 6.55 0.00 0.00 1121024.77 28634.19 1167952.34 00:08:16.933 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x0 length 0x4ff7 00:08:16.933 Nvme1n1p2 : 5.96 118.46 7.40 0.00 0.00 977542.49 43152.94 1077613.49 00:08:16.933 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:16.933 Nvme1n1p2 : 6.04 108.84 6.80 0.00 0.00 1051528.92 68560.74 1193763.45 00:08:16.933 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x0 length 0x8000 00:08:16.933 Nvme2n1 : 5.99 122.50 7.66 0.00 0.00 923747.43 43354.58 1329271.73 00:08:16.933 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x8000 length 0x8000 00:08:16.933 Nvme2n1 : 5.98 103.68 6.48 0.00 0.00 1074361.08 84289.38 1897115.96 00:08:16.933 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:16.933 Verification LBA range: start 0x0 length 0x8000 00:08:16.933 Nvme2n2 : 5.99 123.81 7.74 0.00 0.00 887293.85 41943.04 929199.66 00:08:16.933 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:16.934 Verification LBA range: start 0x8000 length 0x8000 00:08:16.934 Nvme2n2 : 6.08 112.70 7.04 0.00 0.00 965663.72 38111.70 1935832.62 00:08:16.934 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:16.934 Verification LBA range: start 0x0 length 0x8000 00:08:16.934 Nvme2n3 : 6.00 128.09 8.01 0.00 0.00 836416.46 27827.59 948557.98 00:08:16.934 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:16.934 Verification LBA range: start 0x8000 length 0x8000 00:08:16.934 Nvme2n3 : 6.11 118.28 7.39 0.00 0.00 889946.88 17442.66 1974549.27 00:08:16.934 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:16.934 Verification LBA range: start 0x0 length 0x2000 00:08:16.934 Nvme3n1 : 6.09 147.10 9.19 0.00 0.00 710531.19 2848.30 974369.08 00:08:16.934 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:16.934 Verification LBA range: start 0x2000 length 0x2000 00:08:16.934 Nvme3n1 : 6.17 147.37 9.21 0.00 0.00 699164.69 686.87 2013265.92 00:08:16.934 =================================================================================================================== 
00:08:16.934 Total : 1656.22 103.51 0.00 0.00 946340.74 686.87 2013265.92 00:08:18.321 00:08:18.321 real 0m8.154s 00:08:18.321 user 0m14.711s 00:08:18.321 sys 0m0.260s 00:08:18.321 21:50:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.321 ************************************ 00:08:18.321 END TEST bdev_verify_big_io 00:08:18.321 ************************************ 00:08:18.321 21:50:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:18.321 21:50:02 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.321 21:50:02 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:18.321 21:50:02 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.321 21:50:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:18.321 ************************************ 00:08:18.321 START TEST bdev_write_zeroes 00:08:18.321 ************************************ 00:08:18.321 21:50:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.321 [2024-09-30 21:50:02.956078] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:18.321 [2024-09-30 21:50:02.956306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75364 ] 00:08:18.321 [2024-09-30 21:50:03.108460] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:18.321 [2024-09-30 21:50:03.129245] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.583 [2024-09-30 21:50:03.188296] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.869 Running I/O for 1 seconds... 
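The IOPS and MiB/s columns in both verify tables are mutually consistent with each run's I/O size, a quick sanity check worth knowing: MiB/s = IOPS x I/O size in bytes / 2^20. Verifiable in any shell, assuming bc is installed:

  echo "17876.23 * 4096 / 1048576" | bc -l    # ~69.83  MiB/s, 4 KiB verify total
  echo "1656.22 * 65536 / 1048576" | bc -l    # ~103.51 MiB/s, 64 KiB verify total

The write_zeroes run that begins above also drops down to a single core (EAL is started with -c 0x1, hence the lone 'Reactor started on core 0' notice) and runs for just one second (-t 1), enough to exercise the command path rather than measure sustained throughput.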
00:08:20.255 41964.00 IOPS, 163.92 MiB/s 00:08:20.255 Latency(us) 00:08:20.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.255 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:20.255 Nvme0n1 : 1.03 5949.00 23.24 0.00 0.00 21456.82 8469.27 93968.54 00:08:20.255 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:20.255 Nvme1n1p1 : 1.03 6007.44 23.47 0.00 0.00 21213.72 13308.85 67350.84 00:08:20.255 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:20.255 Nvme1n1p2 : 1.03 5984.63 23.38 0.00 0.00 21181.94 15627.82 73803.62 00:08:20.255 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:20.255 Nvme2n1 : 1.04 5991.87 23.41 0.00 0.00 21080.94 15627.82 61301.37 00:08:20.255 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:20.255 Nvme2n2 : 1.04 5984.53 23.38 0.00 0.00 21038.23 15022.87 62511.26 00:08:20.255 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:20.255 Nvme2n3 : 1.04 5977.22 23.35 0.00 0.00 21010.13 12804.73 64124.46 00:08:20.255 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:20.255 Nvme3n1 : 1.04 5908.49 23.08 0.00 0.00 21185.49 12250.19 64527.75 00:08:20.255 =================================================================================================================== 00:08:20.255 Total : 41803.18 163.29 0.00 0.00 21166.26 8469.27 93968.54 00:08:20.255 00:08:20.255 real 0m2.176s 00:08:20.255 user 0m1.777s 00:08:20.255 sys 0m0.276s 00:08:20.255 21:50:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.255 ************************************ 00:08:20.255 END TEST bdev_write_zeroes 00:08:20.255 ************************************ 00:08:20.255 21:50:05 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:20.518 21:50:05 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.518 21:50:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:20.518 21:50:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.518 21:50:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:20.518 ************************************ 00:08:20.518 START TEST bdev_json_nonenclosed 00:08:20.518 ************************************ 00:08:20.518 21:50:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:20.518 [2024-09-30 21:50:05.207135] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:20.518 [2024-09-30 21:50:05.207343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75406 ] 00:08:20.778 [2024-09-30 21:50:05.344869] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:08:20.778 [2024-09-30 21:50:05.363634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.778 [2024-09-30 21:50:05.443817] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.778 [2024-09-30 21:50:05.443984] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:20.778 [2024-09-30 21:50:05.444008] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:20.778 [2024-09-30 21:50:05.444021] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.039 00:08:21.039 real 0m0.475s 00:08:21.039 user 0m0.225s 00:08:21.039 sys 0m0.144s 00:08:21.039 21:50:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.039 ************************************ 00:08:21.039 END TEST bdev_json_nonenclosed 00:08:21.039 ************************************ 00:08:21.039 21:50:05 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:21.039 21:50:05 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:21.039 21:50:05 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:08:21.039 21:50:05 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.039 21:50:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.039 ************************************ 00:08:21.039 START TEST bdev_json_nonarray 00:08:21.039 ************************************ 00:08:21.039 21:50:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:21.039 [2024-09-30 21:50:05.752203] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:21.039 [2024-09-30 21:50:05.752347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75432 ] 00:08:21.300 [2024-09-30 21:50:05.888425] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:21.300 [2024-09-30 21:50:05.909472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.300 [2024-09-30 21:50:05.986520] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.300 [2024-09-30 21:50:05.986691] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
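Both JSON tests in this stretch are negative tests: bdevperf is pointed at a deliberately malformed --json file, and the test passes only when the app rejects the config and exits non-zero, so the *ERROR* lines here are the expected outcome. nonenclosed.json omits the outer {} wrapper (the json_config.c:608 error above); nonarray.json makes 'subsystems' something other than an array (json_config.c:614). For contrast, a minimal well-formed config has this shape (a sketch, not the actual fixture files, whose contents are not shown in this log):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": []
      }
    ]
  }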
00:08:21.300 [2024-09-30 21:50:05.986715] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:21.300 [2024-09-30 21:50:05.986732] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:21.559 ************************************ 00:08:21.559 END TEST bdev_json_nonarray 00:08:21.559 ************************************ 00:08:21.559 00:08:21.559 real 0m0.467s 00:08:21.559 user 0m0.221s 00:08:21.559 sys 0m0.137s 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:21.559 21:50:06 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:21.559 21:50:06 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:21.559 21:50:06 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:21.559 21:50:06 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:21.559 21:50:06 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:21.559 21:50:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:21.559 ************************************ 00:08:21.559 START TEST bdev_gpt_uuid 00:08:21.559 ************************************ 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=75457 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 75457 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 75457 ']' 00:08:21.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:21.559 21:50:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:21.559 [2024-09-30 21:50:06.313912] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:21.559 [2024-09-30 21:50:06.314078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75457 ] 00:08:21.819 [2024-09-30 21:50:06.448603] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:21.819 [2024-09-30 21:50:06.469920] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.819 [2024-09-30 21:50:06.543101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.389 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.389 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:08:22.389 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:22.389 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.389 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.984 Some configs were skipped because the RPC state that can call them passed over. 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:22.984 { 00:08:22.984 "name": "Nvme1n1p1", 00:08:22.984 "aliases": [ 00:08:22.984 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:22.984 ], 00:08:22.984 "product_name": "GPT Disk", 00:08:22.984 "block_size": 4096, 00:08:22.984 "num_blocks": 655104, 00:08:22.984 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:22.984 "assigned_rate_limits": { 00:08:22.984 "rw_ios_per_sec": 0, 00:08:22.984 "rw_mbytes_per_sec": 0, 00:08:22.984 "r_mbytes_per_sec": 0, 00:08:22.984 "w_mbytes_per_sec": 0 00:08:22.984 }, 00:08:22.984 "claimed": false, 00:08:22.984 "zoned": false, 00:08:22.984 "supported_io_types": { 00:08:22.984 "read": true, 00:08:22.984 "write": true, 00:08:22.984 "unmap": true, 00:08:22.984 "flush": true, 00:08:22.984 "reset": true, 00:08:22.984 "nvme_admin": false, 00:08:22.984 "nvme_io": false, 00:08:22.984 "nvme_io_md": false, 00:08:22.984 "write_zeroes": true, 00:08:22.984 "zcopy": false, 00:08:22.984 "get_zone_info": false, 00:08:22.984 "zone_management": false, 00:08:22.984 "zone_append": false, 00:08:22.984 "compare": true, 00:08:22.984 "compare_and_write": false, 00:08:22.984 "abort": true, 00:08:22.984 "seek_hole": false, 00:08:22.984 "seek_data": false, 00:08:22.984 "copy": true, 00:08:22.984 "nvme_iov_md": false 00:08:22.984 }, 00:08:22.984 "driver_specific": { 00:08:22.984 "gpt": { 00:08:22.984 "base_bdev": "Nvme1n1", 00:08:22.984 "offset_blocks": 256, 00:08:22.984 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:22.984 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 
00:08:22.984 "partition_name": "SPDK_TEST_first" 00:08:22.984 } 00:08:22.984 } 00:08:22.984 } 00:08:22.984 ]' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:22.984 { 00:08:22.984 "name": "Nvme1n1p2", 00:08:22.984 "aliases": [ 00:08:22.984 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:22.984 ], 00:08:22.984 "product_name": "GPT Disk", 00:08:22.984 "block_size": 4096, 00:08:22.984 "num_blocks": 655103, 00:08:22.984 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:22.984 "assigned_rate_limits": { 00:08:22.984 "rw_ios_per_sec": 0, 00:08:22.984 "rw_mbytes_per_sec": 0, 00:08:22.984 "r_mbytes_per_sec": 0, 00:08:22.984 "w_mbytes_per_sec": 0 00:08:22.984 }, 00:08:22.984 "claimed": false, 00:08:22.984 "zoned": false, 00:08:22.984 "supported_io_types": { 00:08:22.984 "read": true, 00:08:22.984 "write": true, 00:08:22.984 "unmap": true, 00:08:22.984 "flush": true, 00:08:22.984 "reset": true, 00:08:22.984 "nvme_admin": false, 00:08:22.984 "nvme_io": false, 00:08:22.984 "nvme_io_md": false, 00:08:22.984 "write_zeroes": true, 00:08:22.984 "zcopy": false, 00:08:22.984 "get_zone_info": false, 00:08:22.984 "zone_management": false, 00:08:22.984 "zone_append": false, 00:08:22.984 "compare": true, 00:08:22.984 "compare_and_write": false, 00:08:22.984 "abort": true, 00:08:22.984 "seek_hole": false, 00:08:22.984 "seek_data": false, 00:08:22.984 "copy": true, 00:08:22.984 "nvme_iov_md": false 00:08:22.984 }, 00:08:22.984 "driver_specific": { 00:08:22.984 "gpt": { 00:08:22.984 "base_bdev": "Nvme1n1", 00:08:22.984 "offset_blocks": 655360, 00:08:22.984 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:22.984 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:22.984 "partition_name": "SPDK_TEST_second" 00:08:22.984 } 00:08:22.984 } 00:08:22.984 } 00:08:22.984 ]' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 
abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 75457 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 75457 ']' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 75457 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75457 00:08:22.984 killing process with pid 75457 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75457' 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 75457 00:08:22.984 21:50:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 75457 00:08:23.556 00:08:23.556 real 0m1.968s 00:08:23.556 user 0m2.016s 00:08:23.556 sys 0m0.504s 00:08:23.556 21:50:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:23.556 21:50:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:23.556 ************************************ 00:08:23.556 END TEST bdev_gpt_uuid 00:08:23.556 ************************************ 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:23.556 21:50:08 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:23.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:24.077 Waiting for block devices as requested 00:08:24.077 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:24.077 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:24.336 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:24.336 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:29.622 * Events for some block/disk devices (0000:00:13.0) were not caught, 
they may be missing 00:08:29.622 21:50:14 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:29.622 21:50:14 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:29.884 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:29.884 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:29.884 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:29.884 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:29.884 21:50:14 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:29.884 00:08:29.884 real 0m52.597s 00:08:29.884 user 1m4.170s 00:08:29.884 sys 0m9.209s 00:08:29.884 21:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.884 ************************************ 00:08:29.884 END TEST blockdev_nvme_gpt 00:08:29.884 ************************************ 00:08:29.884 21:50:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:29.884 21:50:14 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:29.884 21:50:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.884 21:50:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.884 21:50:14 -- common/autotest_common.sh@10 -- # set +x 00:08:29.884 ************************************ 00:08:29.884 START TEST nvme 00:08:29.884 ************************************ 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:29.884 * Looking for test storage... 00:08:29.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.884 21:50:14 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.884 21:50:14 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.884 21:50:14 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.884 21:50:14 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.884 21:50:14 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.884 21:50:14 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.884 21:50:14 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.884 21:50:14 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:29.884 21:50:14 nvme -- scripts/common.sh@345 -- # : 1 00:08:29.884 21:50:14 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.884 21:50:14 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:29.884 21:50:14 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:29.884 21:50:14 nvme -- scripts/common.sh@353 -- # local d=1 00:08:29.884 21:50:14 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.884 21:50:14 nvme -- scripts/common.sh@355 -- # echo 1 00:08:29.884 21:50:14 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.884 21:50:14 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@353 -- # local d=2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.884 21:50:14 nvme -- scripts/common.sh@355 -- # echo 2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.884 21:50:14 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.884 21:50:14 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.884 21:50:14 nvme -- scripts/common.sh@368 -- # return 0 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:29.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.884 --rc genhtml_branch_coverage=1 00:08:29.884 --rc genhtml_function_coverage=1 00:08:29.884 --rc genhtml_legend=1 00:08:29.884 --rc geninfo_all_blocks=1 00:08:29.884 --rc geninfo_unexecuted_blocks=1 00:08:29.884 00:08:29.884 ' 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:29.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.884 --rc genhtml_branch_coverage=1 00:08:29.884 --rc genhtml_function_coverage=1 00:08:29.884 --rc genhtml_legend=1 00:08:29.884 --rc geninfo_all_blocks=1 00:08:29.884 --rc geninfo_unexecuted_blocks=1 00:08:29.884 00:08:29.884 ' 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:29.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.884 --rc genhtml_branch_coverage=1 00:08:29.884 --rc genhtml_function_coverage=1 00:08:29.884 --rc genhtml_legend=1 00:08:29.884 --rc geninfo_all_blocks=1 00:08:29.884 --rc geninfo_unexecuted_blocks=1 00:08:29.884 00:08:29.884 ' 00:08:29.884 21:50:14 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:29.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.884 --rc genhtml_branch_coverage=1 00:08:29.884 --rc genhtml_function_coverage=1 00:08:29.884 --rc genhtml_legend=1 00:08:29.884 --rc geninfo_all_blocks=1 00:08:29.884 --rc geninfo_unexecuted_blocks=1 00:08:29.884 00:08:29.884 ' 00:08:29.884 21:50:14 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:30.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:31.028 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:31.028 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:31.028 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:31.028 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:31.291 21:50:15 nvme -- nvme/nvme.sh@79 -- # uname 00:08:31.291 21:50:15 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:31.291 21:50:15 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:31.291 21:50:15 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:31.291 21:50:15 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:31.291 Waiting for stub to ready for secondary processes... 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1071 -- # stubpid=76088 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/76088 ]] 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:31.291 21:50:15 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:31.291 [2024-09-30 21:50:15.934388] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:08:31.291 [2024-09-30 21:50:15.934545] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:32.234 21:50:16 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:32.234 21:50:16 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/76088 ]] 00:08:32.234 21:50:16 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:32.493 [2024-09-30 21:50:17.177021] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:32.493 [2024-09-30 21:50:17.198842] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.493 [2024-09-30 21:50:17.233902] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.493 [2024-09-30 21:50:17.234666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.493 [2024-09-30 21:50:17.234760] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.493 [2024-09-30 21:50:17.248784] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:32.493 [2024-09-30 21:50:17.248868] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:32.493 [2024-09-30 21:50:17.261534] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:32.493 [2024-09-30 21:50:17.261807] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:32.493 [2024-09-30 21:50:17.263552] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:32.493 [2024-09-30 21:50:17.263915] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:32.493 [2024-09-30 21:50:17.263992] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:32.493 [2024-09-30 21:50:17.265728] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:32.493 [2024-09-30 21:50:17.266049] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:32.493 [2024-09-30 21:50:17.266178] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:32.493 [2024-09-30 21:50:17.267723] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 
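The nvme_cuse notices around this point show the stub exporting every SPDK-attached controller and namespace through CUSE (character device in userspace): each 'fuse session for device spdk/nvmeX[nY] created' line corresponds to a /dev/spdk/nvmeX[nY] node, which lets ordinary ioctl-based tooling talk to controllers the kernel driver no longer owns. Outside this stub, the same facility is normally enabled per controller over RPC; a hedged sketch follows (the bdev_nvme_cuse_register RPC exists in SPDK, but check rpc.py bdev_nvme_cuse_register --help for the exact flag spelling):

  # Assumed invocation: expose the controller behind bdev 'Nvme0' via CUSE
  ./scripts/rpc.py bdev_nvme_cuse_register -n Nvme0
  ls /dev/spdk/    # expect nvme0, nvme0n1, ... as in the notices above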
00:08:32.493 [2024-09-30 21:50:17.268058] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:32.493 [2024-09-30 21:50:17.268152] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:32.493 [2024-09-30 21:50:17.268263] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:32.493 [2024-09-30 21:50:17.268363] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:33.434 done. 00:08:33.434 21:50:17 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:33.434 21:50:17 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:33.434 21:50:17 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:33.434 21:50:17 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:33.434 21:50:17 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.434 21:50:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.434 ************************************ 00:08:33.434 START TEST nvme_reset 00:08:33.434 ************************************ 00:08:33.434 21:50:17 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:33.434 Initializing NVMe Controllers 00:08:33.434 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:33.434 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:33.434 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:33.434 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:33.434 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:33.434 00:08:33.434 real 0m0.238s 00:08:33.434 user 0m0.067s 00:08:33.434 sys 0m0.115s 00:08:33.434 21:50:18 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:33.434 ************************************ 00:08:33.434 END TEST nvme_reset 00:08:33.434 ************************************ 00:08:33.434 21:50:18 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:33.434 21:50:18 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:33.434 21:50:18 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:33.434 21:50:18 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:33.434 21:50:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.434 ************************************ 00:08:33.434 START TEST nvme_identify 00:08:33.434 ************************************ 00:08:33.434 21:50:18 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:33.434 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:33.434 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:33.434 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:33.434 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:33.434 21:50:18 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:33.434 21:50:18 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:33.434 21:50:18 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:33.434 21:50:18 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:33.434 21:50:18 nvme.nvme_identify -- 
common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:33.699 21:50:18 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:33.699 21:50:18 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:33.699 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:33.699 [2024-09-30 21:50:18.470026] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 76116 terminated unexpected 00:08:33.699 ===================================================== 00:08:33.699 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:33.699 ===================================================== 00:08:33.699 Controller Capabilities/Features 00:08:33.699 ================================ 00:08:33.699 Vendor ID: 1b36 00:08:33.699 Subsystem Vendor ID: 1af4 00:08:33.699 Serial Number: 12341 00:08:33.699 Model Number: QEMU NVMe Ctrl 00:08:33.699 Firmware Version: 8.0.0 00:08:33.699 Recommended Arb Burst: 6 00:08:33.699 IEEE OUI Identifier: 00 54 52 00:08:33.699 Multi-path I/O 00:08:33.699 May have multiple subsystem ports: No 00:08:33.699 May have multiple controllers: No 00:08:33.699 Associated with SR-IOV VF: No 00:08:33.699 Max Data Transfer Size: 524288 00:08:33.699 Max Number of Namespaces: 256 00:08:33.699 Max Number of I/O Queues: 64 00:08:33.699 NVMe Specification Version (VS): 1.4 00:08:33.699 NVMe Specification Version (Identify): 1.4 00:08:33.699 Maximum Queue Entries: 2048 00:08:33.699 Contiguous Queues Required: Yes 00:08:33.699 Arbitration Mechanisms Supported 00:08:33.699 Weighted Round Robin: Not Supported 00:08:33.699 Vendor Specific: Not Supported 00:08:33.699 Reset Timeout: 7500 ms 00:08:33.699 Doorbell Stride: 4 bytes 00:08:33.699 NVM Subsystem Reset: Not Supported 00:08:33.699 Command Sets Supported 00:08:33.699 NVM Command Set: Supported 00:08:33.699 Boot Partition: Not Supported 00:08:33.699 Memory Page Size Minimum: 4096 bytes 00:08:33.699 Memory Page Size Maximum: 65536 bytes 00:08:33.699 Persistent Memory Region: Not Supported 00:08:33.699 Optional Asynchronous Events Supported 00:08:33.699 Namespace Attribute Notices: Supported 00:08:33.699 Firmware Activation Notices: Not Supported 00:08:33.699 ANA Change Notices: Not Supported 00:08:33.699 PLE Aggregate Log Change Notices: Not Supported 00:08:33.699 LBA Status Info Alert Notices: Not Supported 00:08:33.699 EGE Aggregate Log Change Notices: Not Supported 00:08:33.699 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.699 Zone Descriptor Change Notices: Not Supported 00:08:33.699 Discovery Log Change Notices: Not Supported 00:08:33.699 Controller Attributes 00:08:33.699 128-bit Host Identifier: Not Supported 00:08:33.699 Non-Operational Permissive Mode: Not Supported 00:08:33.699 NVM Sets: Not Supported 00:08:33.699 Read Recovery Levels: Not Supported 00:08:33.699 Endurance Groups: Not Supported 00:08:33.699 Predictable Latency Mode: Not Supported 00:08:33.699 Traffic Based Keep ALive: Not Supported 00:08:33.699 Namespace Granularity: Not Supported 00:08:33.699 SQ Associations: Not Supported 00:08:33.699 UUID List: Not Supported 00:08:33.699 Multi-Domain Subsystem: Not Supported 00:08:33.699 Fixed Capacity Management: Not Supported 00:08:33.699 Variable Capacity Management: Not Supported 00:08:33.699 Delete Endurance Group: Not Supported 00:08:33.699 Delete NVM Set: Not Supported 00:08:33.699 Extended LBA Formats Supported: Supported 
00:08:33.699 Flexible Data Placement Supported: Not Supported 00:08:33.699 00:08:33.699 Controller Memory Buffer Support 00:08:33.699 ================================ 00:08:33.699 Supported: No 00:08:33.699 00:08:33.699 Persistent Memory Region Support 00:08:33.699 ================================ 00:08:33.699 Supported: No 00:08:33.699 00:08:33.699 Admin Command Set Attributes 00:08:33.699 ============================ 00:08:33.699 Security Send/Receive: Not Supported 00:08:33.699 Format NVM: Supported 00:08:33.699 Firmware Activate/Download: Not Supported 00:08:33.699 Namespace Management: Supported 00:08:33.699 Device Self-Test: Not Supported 00:08:33.699 Directives: Supported 00:08:33.699 NVMe-MI: Not Supported 00:08:33.699 Virtualization Management: Not Supported 00:08:33.699 Doorbell Buffer Config: Supported 00:08:33.699 Get LBA Status Capability: Not Supported 00:08:33.699 Command & Feature Lockdown Capability: Not Supported 00:08:33.699 Abort Command Limit: 4 00:08:33.699 Async Event Request Limit: 4 00:08:33.699 Number of Firmware Slots: N/A 00:08:33.699 Firmware Slot 1 Read-Only: N/A 00:08:33.699 Firmware Activation Without Reset: N/A 00:08:33.699 Multiple Update Detection Support: N/A 00:08:33.699 Firmware Update Granularity: No Information Provided 00:08:33.699 Per-Namespace SMART Log: Yes 00:08:33.699 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.699 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:33.699 Command Effects Log Page: Supported 00:08:33.699 Get Log Page Extended Data: Supported 00:08:33.699 Telemetry Log Pages: Not Supported 00:08:33.699 Persistent Event Log Pages: Not Supported 00:08:33.699 Supported Log Pages Log Page: May Support 00:08:33.699 Commands Supported & Effects Log Page: Not Supported 00:08:33.699 Feature Identifiers & Effects Log Page:May Support 00:08:33.699 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.699 Data Area 4 for Telemetry Log: Not Supported 00:08:33.699 Error Log Page Entries Supported: 1 00:08:33.699 Keep Alive: Not Supported 00:08:33.699 00:08:33.699 NVM Command Set Attributes 00:08:33.699 ========================== 00:08:33.699 Submission Queue Entry Size 00:08:33.699 Max: 64 00:08:33.699 Min: 64 00:08:33.699 Completion Queue Entry Size 00:08:33.699 Max: 16 00:08:33.699 Min: 16 00:08:33.699 Number of Namespaces: 256 00:08:33.699 Compare Command: Supported 00:08:33.699 Write Uncorrectable Command: Not Supported 00:08:33.699 Dataset Management Command: Supported 00:08:33.699 Write Zeroes Command: Supported 00:08:33.699 Set Features Save Field: Supported 00:08:33.699 Reservations: Not Supported 00:08:33.699 Timestamp: Supported 00:08:33.699 Copy: Supported 00:08:33.699 Volatile Write Cache: Present 00:08:33.699 Atomic Write Unit (Normal): 1 00:08:33.699 Atomic Write Unit (PFail): 1 00:08:33.699 Atomic Compare & Write Unit: 1 00:08:33.699 Fused Compare & Write: Not Supported 00:08:33.699 Scatter-Gather List 00:08:33.699 SGL Command Set: Supported 00:08:33.699 SGL Keyed: Not Supported 00:08:33.699 SGL Bit Bucket Descriptor: Not Supported 00:08:33.699 SGL Metadata Pointer: Not Supported 00:08:33.699 Oversized SGL: Not Supported 00:08:33.699 SGL Metadata Address: Not Supported 00:08:33.699 SGL Offset: Not Supported 00:08:33.699 Transport SGL Data Block: Not Supported 00:08:33.699 Replay Protected Memory Block: Not Supported 00:08:33.699 00:08:33.699 Firmware Slot Information 00:08:33.699 ========================= 00:08:33.699 Active slot: 1 00:08:33.699 Slot 1 Firmware Revision: 1.0 00:08:33.699 00:08:33.699 
00:08:33.699 Commands Supported and Effects 00:08:33.699 ============================== 00:08:33.699 Admin Commands 00:08:33.699 -------------- 00:08:33.699 Delete I/O Submission Queue (00h): Supported 00:08:33.699 Create I/O Submission Queue (01h): Supported 00:08:33.699 Get Log Page (02h): Supported 00:08:33.700 Delete I/O Completion Queue (04h): Supported 00:08:33.700 Create I/O Completion Queue (05h): Supported 00:08:33.700 Identify (06h): Supported 00:08:33.700 Abort (08h): Supported 00:08:33.700 Set Features (09h): Supported 00:08:33.700 Get Features (0Ah): Supported 00:08:33.700 Asynchronous Event Request (0Ch): Supported 00:08:33.700 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.700 Directive Send (19h): Supported 00:08:33.700 Directive Receive (1Ah): Supported 00:08:33.700 Virtualization Management (1Ch): Supported 00:08:33.700 Doorbell Buffer Config (7Ch): Supported 00:08:33.700 Format NVM (80h): Supported LBA-Change 00:08:33.700 I/O Commands 00:08:33.700 ------------ 00:08:33.700 Flush (00h): Supported LBA-Change 00:08:33.700 Write (01h): Supported LBA-Change 00:08:33.700 Read (02h): Supported 00:08:33.700 Compare (05h): Supported 00:08:33.700 Write Zeroes (08h): Supported LBA-Change 00:08:33.700 Dataset Management (09h): Supported LBA-Change 00:08:33.700 Unknown (0Ch): Supported 00:08:33.700 Unknown (12h): Supported 00:08:33.700 Copy (19h): Supported LBA-Change 00:08:33.700 Unknown (1Dh): Supported LBA-Change 00:08:33.700 00:08:33.700 Error Log 00:08:33.700 ========= 00:08:33.700 00:08:33.700 Arbitration 00:08:33.700 =========== 00:08:33.700 Arbitration Burst: no limit 00:08:33.700 00:08:33.700 Power Management 00:08:33.700 ================ 00:08:33.700 Number of Power States: 1 00:08:33.700 Current Power State: Power State #0 00:08:33.700 Power State #0: 00:08:33.700 Max Power: 25.00 W 00:08:33.700 Non-Operational State: Operational 00:08:33.700 Entry Latency: 16 microseconds 00:08:33.700 Exit Latency: 4 microseconds 00:08:33.700 Relative Read Throughput: 0 00:08:33.700 Relative Read Latency: 0 00:08:33.700 Relative Write Throughput: 0 00:08:33.700 Relative Write Latency: 0 00:08:33.700 Idle Power[2024-09-30 21:50:18.472136] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 76116 terminated unexpected 00:08:33.700 : Not Reported 00:08:33.700 Active Power: Not Reported 00:08:33.700 Non-Operational Permissive Mode: Not Supported 00:08:33.700 00:08:33.700 Health Information 00:08:33.700 ================== 00:08:33.700 Critical Warnings: 00:08:33.700 Available Spare Space: OK 00:08:33.700 Temperature: OK 00:08:33.700 Device Reliability: OK 00:08:33.700 Read Only: No 00:08:33.700 Volatile Memory Backup: OK 00:08:33.700 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.700 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.700 Available Spare: 0% 00:08:33.700 Available Spare Threshold: 0% 00:08:33.700 Life Percentage Used: 0% 00:08:33.700 Data Units Read: 985 00:08:33.700 Data Units Written: 858 00:08:33.700 Host Read Commands: 46995 00:08:33.700 Host Write Commands: 45889 00:08:33.700 Controller Busy Time: 0 minutes 00:08:33.700 Power Cycles: 0 00:08:33.700 Power On Hours: 0 hours 00:08:33.700 Unsafe Shutdowns: 0 00:08:33.700 Unrecoverable Media Errors: 0 00:08:33.700 Lifetime Error Log Entries: 0 00:08:33.700 Warning Temperature Time: 0 minutes 00:08:33.700 Critical Temperature Time: 0 minutes 00:08:33.700 00:08:33.700 Number of Queues 00:08:33.700 ================ 00:08:33.700 Number of I/O Submission Queues: 64 
00:08:33.700 Number of I/O Completion Queues: 64 00:08:33.700 00:08:33.700 ZNS Specific Controller Data 00:08:33.700 ============================ 00:08:33.700 Zone Append Size Limit: 0 00:08:33.700 00:08:33.700 00:08:33.700 Active Namespaces 00:08:33.700 ================= 00:08:33.700 Namespace ID:1 00:08:33.700 Error Recovery Timeout: Unlimited 00:08:33.700 Command Set Identifier: NVM (00h) 00:08:33.700 Deallocate: Supported 00:08:33.700 Deallocated/Unwritten Error: Supported 00:08:33.700 Deallocated Read Value: All 0x00 00:08:33.700 Deallocate in Write Zeroes: Not Supported 00:08:33.700 Deallocated Guard Field: 0xFFFF 00:08:33.700 Flush: Supported 00:08:33.700 Reservation: Not Supported 00:08:33.700 Namespace Sharing Capabilities: Private 00:08:33.700 Size (in LBAs): 1310720 (5GiB) 00:08:33.700 Capacity (in LBAs): 1310720 (5GiB) 00:08:33.700 Utilization (in LBAs): 1310720 (5GiB) 00:08:33.700 Thin Provisioning: Not Supported 00:08:33.700 Per-NS Atomic Units: No 00:08:33.700 Maximum Single Source Range Length: 128 00:08:33.700 Maximum Copy Length: 128 00:08:33.700 Maximum Source Range Count: 128 00:08:33.700 NGUID/EUI64 Never Reused: No 00:08:33.700 Namespace Write Protected: No 00:08:33.700 Number of LBA Formats: 8 00:08:33.700 Current LBA Format: LBA Format #04 00:08:33.700 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.700 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.700 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.700 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.700 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.700 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.700 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.700 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.700 00:08:33.700 NVM Specific Namespace Data 00:08:33.700 =========================== 00:08:33.700 Logical Block Storage Tag Mask: 0 00:08:33.700 Protection Information Capabilities: 00:08:33.700 16b Guard Protection Information Storage Tag Support: No 00:08:33.700 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.700 Storage Tag Check Read Support: No 00:08:33.700 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.700 ===================================================== 00:08:33.700 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:33.700 ===================================================== 00:08:33.700 Controller Capabilities/Features 00:08:33.700 ================================ 00:08:33.700 Vendor ID: 1b36 00:08:33.700 Subsystem Vendor ID: 1af4 00:08:33.700 Serial Number: 12343 00:08:33.700 Model Number: QEMU NVMe Ctrl 00:08:33.700 Firmware Version: 8.0.0 00:08:33.700 Recommended Arb Burst: 6 00:08:33.700 IEEE OUI Identifier: 00 54 
52 00:08:33.700 Multi-path I/O 00:08:33.700 May have multiple subsystem ports: No 00:08:33.700 May have multiple controllers: Yes 00:08:33.700 Associated with SR-IOV VF: No 00:08:33.700 Max Data Transfer Size: 524288 00:08:33.700 Max Number of Namespaces: 256 00:08:33.700 Max Number of I/O Queues: 64 00:08:33.700 NVMe Specification Version (VS): 1.4 00:08:33.700 NVMe Specification Version (Identify): 1.4 00:08:33.700 Maximum Queue Entries: 2048 00:08:33.700 Contiguous Queues Required: Yes 00:08:33.700 Arbitration Mechanisms Supported 00:08:33.700 Weighted Round Robin: Not Supported 00:08:33.700 Vendor Specific: Not Supported 00:08:33.700 Reset Timeout: 7500 ms 00:08:33.700 Doorbell Stride: 4 bytes 00:08:33.700 NVM Subsystem Reset: Not Supported 00:08:33.700 Command Sets Supported 00:08:33.700 NVM Command Set: Supported 00:08:33.700 Boot Partition: Not Supported 00:08:33.700 Memory Page Size Minimum: 4096 bytes 00:08:33.700 Memory Page Size Maximum: 65536 bytes 00:08:33.700 Persistent Memory Region: Not Supported 00:08:33.700 Optional Asynchronous Events Supported 00:08:33.700 Namespace Attribute Notices: Supported 00:08:33.700 Firmware Activation Notices: Not Supported 00:08:33.700 ANA Change Notices: Not Supported 00:08:33.700 PLE Aggregate Log Change Notices: Not Supported 00:08:33.700 LBA Status Info Alert Notices: Not Supported 00:08:33.700 EGE Aggregate Log Change Notices: Not Supported 00:08:33.700 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.700 Zone Descriptor Change Notices: Not Supported 00:08:33.700 Discovery Log Change Notices: Not Supported 00:08:33.700 Controller Attributes 00:08:33.700 128-bit Host Identifier: Not Supported 00:08:33.700 Non-Operational Permissive Mode: Not Supported 00:08:33.700 NVM Sets: Not Supported 00:08:33.700 Read Recovery Levels: Not Supported 00:08:33.700 Endurance Groups: Supported 00:08:33.700 Predictable Latency Mode: Not Supported 00:08:33.700 Traffic Based Keep ALive: Not Supported 00:08:33.700 Namespace Granularity: Not Supported 00:08:33.700 SQ Associations: Not Supported 00:08:33.700 UUID List: Not Supported 00:08:33.700 Multi-Domain Subsystem: Not Supported 00:08:33.700 Fixed Capacity Management: Not Supported 00:08:33.700 Variable Capacity Management: Not Supported 00:08:33.700 Delete Endurance Group: Not Supported 00:08:33.700 Delete NVM Set: Not Supported 00:08:33.700 Extended LBA Formats Supported: Supported 00:08:33.700 Flexible Data Placement Supported: Supported 00:08:33.700 00:08:33.700 Controller Memory Buffer Support 00:08:33.700 ================================ 00:08:33.701 Supported: No 00:08:33.701 00:08:33.701 Persistent Memory Region Support 00:08:33.701 ================================ 00:08:33.701 Supported: No 00:08:33.701 00:08:33.701 Admin Command Set Attributes 00:08:33.701 ============================ 00:08:33.701 Security Send/Receive: Not Supported 00:08:33.701 Format NVM: Supported 00:08:33.701 Firmware Activate/Download: Not Supported 00:08:33.701 Namespace Management: Supported 00:08:33.701 Device Self-Test: Not Supported 00:08:33.701 Directives: Supported 00:08:33.701 NVMe-MI: Not Supported 00:08:33.701 Virtualization Management: Not Supported 00:08:33.701 Doorbell Buffer Config: Supported 00:08:33.701 Get LBA Status Capability: Not Supported 00:08:33.701 Command & Feature Lockdown Capability: Not Supported 00:08:33.701 Abort Command Limit: 4 00:08:33.701 Async Event Request Limit: 4 00:08:33.701 Number of Firmware Slots: N/A 00:08:33.701 Firmware Slot 1 Read-Only: N/A 00:08:33.701 Firmware 
Activation Without Reset: N/A 00:08:33.701 Multiple Update Detection Support: N/A 00:08:33.701 Firmware Update Granularity: No Information Provided 00:08:33.701 Per-Namespace SMART Log: Yes 00:08:33.701 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.701 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:33.701 Command Effects Log Page: Supported 00:08:33.701 Get Log Page Extended Data: Supported 00:08:33.701 Telemetry Log Pages: Not Supported 00:08:33.701 Persistent Event Log Pages: Not Supported 00:08:33.701 Supported Log Pages Log Page: May Support 00:08:33.701 Commands Supported & Effects Log Page: Not Supported 00:08:33.701 Feature Identifiers & Effects Log Page:May Support 00:08:33.701 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.701 Data Area 4 for Telemetry Log: Not Supported 00:08:33.701 Error Log Page Entries Supported: 1 00:08:33.701 Keep Alive: Not Supported 00:08:33.701 00:08:33.701 NVM Command Set Attributes 00:08:33.701 ========================== 00:08:33.701 Submission Queue Entry Size 00:08:33.701 Max: 64 00:08:33.701 Min: 64 00:08:33.701 Completion Queue Entry Size 00:08:33.701 Max: 16 00:08:33.701 Min: 16 00:08:33.701 Number of Namespaces: 256 00:08:33.701 Compare Command: Supported 00:08:33.701 Write Uncorrectable Command: Not Supported 00:08:33.701 Dataset Management Command: Supported 00:08:33.701 Write Zeroes Command: Supported 00:08:33.701 Set Features Save Field: Supported 00:08:33.701 Reservations: Not Supported 00:08:33.701 Timestamp: Supported 00:08:33.701 Copy: Supported 00:08:33.701 Volatile Write Cache: Present 00:08:33.701 Atomic Write Unit (Normal): 1 00:08:33.701 Atomic Write Unit (PFail): 1 00:08:33.701 Atomic Compare & Write Unit: 1 00:08:33.701 Fused Compare & Write: Not Supported 00:08:33.701 Scatter-Gather List 00:08:33.701 SGL Command Set: Supported 00:08:33.701 SGL Keyed: Not Supported 00:08:33.701 SGL Bit Bucket Descriptor: Not Supported 00:08:33.701 SGL Metadata Pointer: Not Supported 00:08:33.701 Oversized SGL: Not Supported 00:08:33.701 SGL Metadata Address: Not Supported 00:08:33.701 SGL Offset: Not Supported 00:08:33.701 Transport SGL Data Block: Not Supported 00:08:33.701 Replay Protected Memory Block: Not Supported 00:08:33.701 00:08:33.701 Firmware Slot Information 00:08:33.701 ========================= 00:08:33.701 Active slot: 1 00:08:33.701 Slot 1 Firmware Revision: 1.0 00:08:33.701 00:08:33.701 00:08:33.701 Commands Supported and Effects 00:08:33.701 ============================== 00:08:33.701 Admin Commands 00:08:33.701 -------------- 00:08:33.701 Delete I/O Submission Queue (00h): Supported 00:08:33.701 Create I/O Submission Queue (01h): Supported 00:08:33.701 Get Log Page (02h): Supported 00:08:33.701 Delete I/O Completion Queue (04h): Supported 00:08:33.701 Create I/O Completion Queue (05h): Supported 00:08:33.701 Identify (06h): Supported 00:08:33.701 Abort (08h): Supported 00:08:33.701 Set Features (09h): Supported 00:08:33.701 Get Features (0Ah): Supported 00:08:33.701 Asynchronous Event Request (0Ch): Supported 00:08:33.701 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.701 Directive Send (19h): Supported 00:08:33.701 Directive Receive (1Ah): Supported 00:08:33.701 Virtualization Management (1Ch): Supported 00:08:33.701 Doorbell Buffer Config (7Ch): Supported 00:08:33.701 Format NVM (80h): Supported LBA-Change 00:08:33.701 I/O Commands 00:08:33.701 ------------ 00:08:33.701 Flush (00h): Supported LBA-Change 00:08:33.701 Write (01h): Supported LBA-Change 00:08:33.701 Read (02h): 
Supported 00:08:33.701 Compare (05h): Supported 00:08:33.701 Write Zeroes (08h): Supported LBA-Change 00:08:33.701 Dataset Management (09h): Supported LBA-Change 00:08:33.701 Unknown (0Ch): Supported 00:08:33.701 Unknown (12h): Supported 00:08:33.701 Copy (19h): Supported LBA-Change 00:08:33.701 Unknown (1Dh): Supported LBA-Change 00:08:33.701 00:08:33.701 Error Log 00:08:33.701 ========= 00:08:33.701 00:08:33.701 Arbitration 00:08:33.701 =========== 00:08:33.701 Arbitration Burst: no limit 00:08:33.701 00:08:33.701 Power Management 00:08:33.701 ================ 00:08:33.701 Number of Power States: 1 00:08:33.701 Current Power State: Power State #0 00:08:33.701 Power State #0: 00:08:33.701 Max Power: 25.00 W 00:08:33.701 Non-Operational State: Operational 00:08:33.701 Entry Latency: 16 microseconds 00:08:33.701 Exit Latency: 4 microseconds 00:08:33.701 Relative Read Throughput: 0 00:08:33.701 Relative Read Latency: 0 00:08:33.701 Relative Write Throughput: 0 00:08:33.701 Relative Write Latency: 0 00:08:33.701 Idle Power: Not Reported 00:08:33.701 Active Power: Not Reported 00:08:33.701 Non-Operational Permissive Mode: Not Supported 00:08:33.701 00:08:33.701 Health Information 00:08:33.701 ================== 00:08:33.701 Critical Warnings: 00:08:33.701 Available Spare Space: OK 00:08:33.701 Temperature: OK 00:08:33.701 Device Reliability: OK 00:08:33.701 Read Only: No 00:08:33.701 Volatile Memory Backup: OK 00:08:33.701 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.701 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.701 Available Spare: 0% 00:08:33.701 Available Spare Threshold: 0% 00:08:33.701 Life Percentage Used: 0% 00:08:33.701 Data Units Read: 761 00:08:33.701 Data Units Written: 690 00:08:33.701 Host Read Commands: 32909 00:08:33.701 Host Write Commands: 32332 00:08:33.701 Controller Busy Time: 0 minutes 00:08:33.701 Power Cycles: 0 00:08:33.701 Power On Hours: 0 hours 00:08:33.701 Unsafe Shutdowns: 0 00:08:33.701 Unrecoverable Media Errors: 0 00:08:33.701 Lifetime Error Log Entries: 0 00:08:33.701 Warning Temperature Time: 0 minutes 00:08:33.701 Critical Temperature Time: 0 minutes 00:08:33.701 00:08:33.701 Number of Queues 00:08:33.701 ================ 00:08:33.701 Number of I/O Submission Queues: 64 00:08:33.701 Number of I/O Completion Queues: 64 00:08:33.701 00:08:33.701 ZNS Specific Controller Data 00:08:33.701 ============================ 00:08:33.701 Zone Append Size Limit: 0 00:08:33.701 00:08:33.701 00:08:33.701 Active Namespaces 00:08:33.701 ================= 00:08:33.701 Namespace ID:1 00:08:33.701 Error Recovery Timeout: Unlimited 00:08:33.701 Command Set Identifier: NVM (00h) 00:08:33.701 Deallocate: Supported 00:08:33.701 Deallocated/Unwritten Error: Supported 00:08:33.701 Deallocated Read Value: All 0x00 00:08:33.701 Deallocate in Write Zeroes: Not Supported 00:08:33.701 Deallocated Guard Field: 0xFFFF 00:08:33.701 Flush: Supported 00:08:33.701 Reservation: Not Supported 00:08:33.701 Namespace Sharing Capabilities: Multiple Controllers 00:08:33.701 Size (in LBAs): 262144 (1GiB) 00:08:33.701 Capacity (in LBAs): 262144 (1GiB) 00:08:33.701 Utilization (in LBAs): 262144 (1GiB) 00:08:33.701 Thin Provisioning: Not Supported 00:08:33.701 Per-NS Atomic Units: No 00:08:33.701 Maximum Single Source Range Length: 128 00:08:33.701 Maximum Copy Length: 128 00:08:33.701 Maximum Source Range Count: 128 00:08:33.701 NGUID/EUI64 Never Reused: No 00:08:33.701 Namespace Write Protected: No 00:08:33.701 Endurance group ID: 1 00:08:33.701 Number of LBA Formats: 8 
00:08:33.701 Current LBA Format: LBA Format #04 00:08:33.701 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.701 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.701 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.701 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.701 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.701 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.701 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.702 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.702 00:08:33.702 Get Feature FDP: 00:08:33.702 ================ 00:08:33.702 Enabled: Yes 00:08:33.702 FDP configuration index: 0 00:08:33.702 00:08:33.702 FDP configurations log page 00:08:33.702 =========================== 00:08:33.702 Number of FDP configurations: 1 00:08:33.702 Version: 0 00:08:33.702 Size: 112 00:08:33.702 FDP Configuration Descriptor: 0 00:08:33.702 Descriptor Size: 96 00:08:33.702 Reclaim Group Identifier format: 2 00:08:33.702 FDP Volatile Write Cache: Not Present 00:08:33.702 FDP Configuration: Valid 00:08:33.702 Vendor Specific Size: 0 00:08:33.702 Number of Reclaim Groups: 2 00:08:33.702 Number of Reclaim Unit Handles: 8 00:08:33.702 Max Placement Identifiers: 128 00:08:33.702 Number of Namespaces Supported: 256 00:08:33.702 Reclaim unit Nominal Size: 6000000 bytes 00:08:33.702 Estimated Reclaim Unit Time Limit: Not Reported 00:08:33.702 RUH Desc #000: RUH Type: Initially Isolated 00:08:33.702 RUH Desc #001: RUH Type: Initially Isolated 00:08:33.702 RUH Desc #002: RUH Type: Initially Isolated 00:08:33.702 RUH Desc #003: RUH Type: Initially Isolated 00:08:33.702 RUH Desc #004: RUH Type: Initially Isolated 00:08:33.702 RUH Desc #005: RUH Type: Initially Isolated 00:08:33.702 RUH Desc #006: RUH Type: Initially Isolated 00:08:33.702 RUH Desc #007: RUH Type: Initially Isolated 00:08:33.702 00:08:33.702 FDP reclaim unit handle usage log page 00:08:33.702 ====================================== 00:08:33.702 Number of Reclaim Unit Handles: 8 00:08:33.702 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:33.702 RUH Usage Desc #001: RUH Attributes: Unused 00:08:33.702 RUH Usage Desc #002: RUH Attributes: Unused 00:08:33.702 RUH Usage Desc #003: RUH Attributes: Unused 00:08:33.702 RUH Usage Desc #004: RUH Attributes: Unused [2024-09-30 21:50:18.474482] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 76116 terminated unexpected 00:08:33.702 RUH Usage Desc #005: RUH Attributes: Unused 00:08:33.702 RUH Usage Desc #006: RUH Attributes: Unused 00:08:33.702 RUH Usage Desc #007: RUH Attributes: Unused 00:08:33.702 00:08:33.702 FDP statistics log page 00:08:33.702 ======================= 00:08:33.702 Host bytes with metadata written: 426770432 00:08:33.702 Media bytes with metadata written: 426856448 00:08:33.702 Media bytes erased: 0 00:08:33.702 00:08:33.702 FDP events log page 00:08:33.702 =================== 00:08:33.702 Number of FDP events: 0 00:08:33.702 00:08:33.702 NVM Specific Namespace Data 00:08:33.702 =========================== 00:08:33.702 Logical Block Storage Tag Mask: 0 00:08:33.702 Protection Information Capabilities: 00:08:33.702 16b Guard Protection Information Storage Tag Support: No 00:08:33.702 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.702 Storage Tag Check Read Support: No 00:08:33.702 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 Extended LBA Format
#01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.702 ===================================================== 00:08:33.702 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.702 ===================================================== 00:08:33.702 Controller Capabilities/Features 00:08:33.702 ================================ 00:08:33.702 Vendor ID: 1b36 00:08:33.702 Subsystem Vendor ID: 1af4 00:08:33.702 Serial Number: 12340 00:08:33.702 Model Number: QEMU NVMe Ctrl 00:08:33.702 Firmware Version: 8.0.0 00:08:33.702 Recommended Arb Burst: 6 00:08:33.702 IEEE OUI Identifier: 00 54 52 00:08:33.702 Multi-path I/O 00:08:33.702 May have multiple subsystem ports: No 00:08:33.702 May have multiple controllers: No 00:08:33.702 Associated with SR-IOV VF: No 00:08:33.702 Max Data Transfer Size: 524288 00:08:33.702 Max Number of Namespaces: 256 00:08:33.702 Max Number of I/O Queues: 64 00:08:33.702 NVMe Specification Version (VS): 1.4 00:08:33.702 NVMe Specification Version (Identify): 1.4 00:08:33.702 Maximum Queue Entries: 2048 00:08:33.702 Contiguous Queues Required: Yes 00:08:33.702 Arbitration Mechanisms Supported 00:08:33.702 Weighted Round Robin: Not Supported 00:08:33.702 Vendor Specific: Not Supported 00:08:33.702 Reset Timeout: 7500 ms 00:08:33.702 Doorbell Stride: 4 bytes 00:08:33.702 NVM Subsystem Reset: Not Supported 00:08:33.702 Command Sets Supported 00:08:33.702 NVM Command Set: Supported 00:08:33.702 Boot Partition: Not Supported 00:08:33.702 Memory Page Size Minimum: 4096 bytes 00:08:33.702 Memory Page Size Maximum: 65536 bytes 00:08:33.702 Persistent Memory Region: Not Supported 00:08:33.702 Optional Asynchronous Events Supported 00:08:33.702 Namespace Attribute Notices: Supported 00:08:33.702 Firmware Activation Notices: Not Supported 00:08:33.702 ANA Change Notices: Not Supported 00:08:33.702 PLE Aggregate Log Change Notices: Not Supported 00:08:33.702 LBA Status Info Alert Notices: Not Supported 00:08:33.702 EGE Aggregate Log Change Notices: Not Supported 00:08:33.702 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.702 Zone Descriptor Change Notices: Not Supported 00:08:33.702 Discovery Log Change Notices: Not Supported 00:08:33.702 Controller Attributes 00:08:33.702 128-bit Host Identifier: Not Supported 00:08:33.702 Non-Operational Permissive Mode: Not Supported 00:08:33.702 NVM Sets: Not Supported 00:08:33.702 Read Recovery Levels: Not Supported 00:08:33.702 Endurance Groups: Not Supported 00:08:33.702 Predictable Latency Mode: Not Supported 00:08:33.702 Traffic Based Keep ALive: Not Supported 00:08:33.702 Namespace Granularity: Not Supported 00:08:33.702 SQ Associations: Not Supported 00:08:33.702 UUID List: Not Supported 00:08:33.702 Multi-Domain Subsystem: Not Supported 00:08:33.702 Fixed Capacity Management: Not Supported 00:08:33.702 Variable Capacity Management: Not Supported 00:08:33.702 Delete Endurance Group: Not 
Supported 00:08:33.702 Delete NVM Set: Not Supported 00:08:33.702 Extended LBA Formats Supported: Supported 00:08:33.702 Flexible Data Placement Supported: Not Supported 00:08:33.702 00:08:33.702 Controller Memory Buffer Support 00:08:33.702 ================================ 00:08:33.702 Supported: No 00:08:33.702 00:08:33.702 Persistent Memory Region Support 00:08:33.702 ================================ 00:08:33.702 Supported: No 00:08:33.702 00:08:33.702 Admin Command Set Attributes 00:08:33.702 ============================ 00:08:33.702 Security Send/Receive: Not Supported 00:08:33.702 Format NVM: Supported 00:08:33.702 Firmware Activate/Download: Not Supported 00:08:33.702 Namespace Management: Supported 00:08:33.702 Device Self-Test: Not Supported 00:08:33.702 Directives: Supported 00:08:33.702 NVMe-MI: Not Supported 00:08:33.702 Virtualization Management: Not Supported 00:08:33.702 Doorbell Buffer Config: Supported 00:08:33.702 Get LBA Status Capability: Not Supported 00:08:33.702 Command & Feature Lockdown Capability: Not Supported 00:08:33.702 Abort Command Limit: 4 00:08:33.702 Async Event Request Limit: 4 00:08:33.702 Number of Firmware Slots: N/A 00:08:33.702 Firmware Slot 1 Read-Only: N/A 00:08:33.702 Firmware Activation Without Reset: N/A 00:08:33.702 Multiple Update Detection Support: N/A 00:08:33.702 Firmware Update Granularity: No Information Provided 00:08:33.702 Per-Namespace SMART Log: Yes 00:08:33.702 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.702 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:33.702 Command Effects Log Page: Supported 00:08:33.702 Get Log Page Extended Data: Supported 00:08:33.702 Telemetry Log Pages: Not Supported 00:08:33.702 Persistent Event Log Pages: Not Supported 00:08:33.702 Supported Log Pages Log Page: May Support 00:08:33.702 Commands Supported & Effects Log Page: Not Supported 00:08:33.702 Feature Identifiers & Effects Log Page:May Support 00:08:33.702 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.702 Data Area 4 for Telemetry Log: Not Supported 00:08:33.702 Error Log Page Entries Supported: 1 00:08:33.702 Keep Alive: Not Supported 00:08:33.702 00:08:33.702 NVM Command Set Attributes 00:08:33.702 ========================== 00:08:33.702 Submission Queue Entry Size 00:08:33.702 Max: 64 00:08:33.702 Min: 64 00:08:33.702 Completion Queue Entry Size 00:08:33.702 Max: 16 00:08:33.702 Min: 16 00:08:33.702 Number of Namespaces: 256 00:08:33.702 Compare Command: Supported 00:08:33.702 Write Uncorrectable Command: Not Supported 00:08:33.702 Dataset Management Command: Supported 00:08:33.702 Write Zeroes Command: Supported 00:08:33.702 Set Features Save Field: Supported 00:08:33.702 Reservations: Not Supported 00:08:33.703 Timestamp: Supported 00:08:33.703 Copy: Supported 00:08:33.703 Volatile Write Cache: Present 00:08:33.703 Atomic Write Unit (Normal): 1 00:08:33.703 Atomic Write Unit (PFail): 1 00:08:33.703 Atomic Compare & Write Unit: 1 00:08:33.703 Fused Compare & Write: Not Supported 00:08:33.703 Scatter-Gather List 00:08:33.703 SGL Command Set: Supported 00:08:33.703 SGL Keyed: Not Supported 00:08:33.703 SGL Bit Bucket Descriptor: Not Supported 00:08:33.703 SGL Metadata Pointer: Not Supported 00:08:33.703 Oversized SGL: Not Supported 00:08:33.703 SGL Metadata Address: Not Supported 00:08:33.703 SGL Offset: Not Supported 00:08:33.703 Transport SGL Data Block: Not Supported 00:08:33.703 Replay Protected Memory Block: Not Supported 00:08:33.703 00:08:33.703 Firmware Slot Information 00:08:33.703 
========================= 00:08:33.703 Active slot: 1 00:08:33.703 Slot 1 Firmware Revision: 1.0 00:08:33.703 00:08:33.703 00:08:33.703 Commands Supported and Effects 00:08:33.703 ============================== 00:08:33.703 Admin Commands 00:08:33.703 -------------- 00:08:33.703 Delete I/O Submission Queue (00h): Supported 00:08:33.703 Create I/O Submission Queue (01h): Supported 00:08:33.703 Get Log Page (02h): Supported 00:08:33.703 Delete I/O Completion Queue (04h): Supported 00:08:33.703 Create I/O Completion Queue (05h): Supported 00:08:33.703 Identify (06h): Supported 00:08:33.703 Abort (08h): Supported 00:08:33.703 Set Features (09h): Supported 00:08:33.703 Get Features (0Ah): Supported 00:08:33.703 Asynchronous Event Request (0Ch): Supported 00:08:33.703 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.703 Directive Send (19h): Supported 00:08:33.703 Directive Receive (1Ah): Supported 00:08:33.703 Virtualization Management (1Ch): Supported 00:08:33.703 Doorbell Buffer Config (7Ch): Supported 00:08:33.703 Format NVM (80h): Supported LBA-Change 00:08:33.703 I/O Commands 00:08:33.703 ------------ 00:08:33.703 Flush (00h): Supported LBA-Change 00:08:33.703 Write (01h): Supported LBA-Change 00:08:33.703 Read (02h): Supported 00:08:33.703 Compare (05h): Supported 00:08:33.703 Write Zeroes (08h): Supported LBA-Change 00:08:33.703 Dataset Management (09h): Supported LBA-Change 00:08:33.703 Unknown (0Ch): Supported 00:08:33.703 Unknown (12h): Supported 00:08:33.703 Copy (19h): Supported LBA-Change 00:08:33.703 Unknown (1Dh): Supported LBA-Change 00:08:33.703 00:08:33.703 Error Log 00:08:33.703 ========= 00:08:33.703 00:08:33.703 Arbitration 00:08:33.703 =========== 00:08:33.703 Arbitration Burst: no limit 00:08:33.703 00:08:33.703 Power Management 00:08:33.703 ================ 00:08:33.703 Number of Power States: 1 00:08:33.703 Current Power State: Power State #0 00:08:33.703 Power State #0: 00:08:33.703 Max Power: 25.00 W 00:08:33.703 Non-Operational State: Operational 00:08:33.703 Entry Latency: 16 microseconds 00:08:33.703 Exit Latency: 4 microseconds 00:08:33.703 Relative Read Throughput: 0 00:08:33.703 Relative Read Latency: 0 00:08:33.703 Relative Write Throughput: 0 00:08:33.703 Relative Write Latency: 0 00:08:33.703 Idle Power: Not Reported 00:08:33.703 Active Power: Not Reported 00:08:33.703 Non-Operational Permissive Mode: Not Supported 00:08:33.703 00:08:33.703 Health Information 00:08:33.703 ================== 00:08:33.703 Critical Warnings: 00:08:33.703 Available Spare Space: OK 00:08:33.703 Temperature: OK 00:08:33.703 Device Reliability: OK 00:08:33.703 Read Only: No 00:08:33.703 Volatile Memory Backup: OK 00:08:33.703 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.703 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.703 Available Spare: 0% 00:08:33.703 Available Spare Threshold: 0% 00:08:33.703 Life Percentage Used: 0% 00:08:33.703 Data Units Read: 638 00:08:33.703 Data Units Written: 566 00:08:33.703 Host Read Commands: 31584 00:08:33.703 Host Write Commands: 31370 00:08:33.703 Controller Busy Time: 0 minutes 00:08:33.703 Power Cycles: 0 00:08:33.703 Power On Hours: 0 hours 00:08:33.703 Unsafe Shutdowns: 0 00:08:33.703 Unrecoverable Media Errors: 0 00:08:33.703 Lifetime Error Log Entries: 0 00:08:33.703 Warning Temperature Time: 0 minutes 00:08:33.703 Critical Temperature Time: 0 minutes 00:08:33.703 00:08:33.703 Number of Queues 00:08:33.703 ================ 00:08:33.703 Number of I/O Submission Queues: 64 00:08:33.703 Number of I/O 
Completion Queues: 64 00:08:33.703 00:08:33.703 ZNS Specific Controller Data 00:08:33.703 ============================ 00:08:33.703 Zone Append Size Limit: 0 00:08:33.703 00:08:33.703 00:08:33.703 Active Namespaces 00:08:33.703 ================= 00:08:33.703 Namespace ID:1 00:08:33.703 Error Recovery Timeout: Unlimited 00:08:33.703 Command Set Identifier: NVM (00h) 00:08:33.703 Deallocate: Supported 00:08:33.703 Deallocated/Unwritten Error: Supported 00:08:33.703 Deallocated Read Value: All 0x00 00:08:33.703 Deallocate in Write Zeroes: Not Supported 00:08:33.703 Deallocated Guard Field: 0xFFFF 00:08:33.703 Flush: Supported 00:08:33.703 Reservation: Not Supported 00:08:33.703 Metadata Transferred as: Separate Metadata Buffer 00:08:33.703 Namespace Sharing Capabilities: Private 00:08:33.703 Size (in LBAs): 1548666 (5GiB) 00:08:33.703 Capacity (in LBAs): 1548666 (5GiB) 00:08:33.703 Utilization (in LBAs): 1548666 (5GiB) 00:08:33.703 Thin Provisioning: Not Supported 00:08:33.703 Per-NS Atomic Units: No 00:08:33.703 Maximum Single Source Range Length: 128 00:08:33.703 Maximum Copy Length: 128 00:08:33.703 Maximum Source Range Count: 128 00:08:33.703 NGUID/EUI64 Never Reused: No 00:08:33.703 Namespace Write Protected: No 00:08:33.703 Number of LBA Formats: 8 00:08:33.703 Current LBA Format: LBA Format #07 00:08:33.703 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.703 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.703 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.703 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.703 LBA Format #04: Data Size: 4096 Metadata Size: 0 [2024-09-30 21:50:18.475270] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 76116 terminated unexpected 00:08:33.703 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.703 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.703 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.703 00:08:33.703 NVM Specific Namespace Data 00:08:33.703 =========================== 00:08:33.703 Logical Block Storage Tag Mask: 0 00:08:33.703 Protection Information Capabilities: 00:08:33.703 16b Guard Protection Information Storage Tag Support: No 00:08:33.703 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.703 Storage Tag Check Read Support: No 00:08:33.703 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.703 ===================================================== 00:08:33.703 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:33.703 ===================================================== 00:08:33.703 Controller Capabilities/Features 00:08:33.703 ================================ 00:08:33.703 Vendor ID: 1b36 00:08:33.703 Subsystem Vendor ID: 1af4
00:08:33.703 Serial Number: 12342 00:08:33.703 Model Number: QEMU NVMe Ctrl 00:08:33.703 Firmware Version: 8.0.0 00:08:33.703 Recommended Arb Burst: 6 00:08:33.703 IEEE OUI Identifier: 00 54 52 00:08:33.703 Multi-path I/O 00:08:33.703 May have multiple subsystem ports: No 00:08:33.703 May have multiple controllers: No 00:08:33.703 Associated with SR-IOV VF: No 00:08:33.703 Max Data Transfer Size: 524288 00:08:33.703 Max Number of Namespaces: 256 00:08:33.703 Max Number of I/O Queues: 64 00:08:33.703 NVMe Specification Version (VS): 1.4 00:08:33.703 NVMe Specification Version (Identify): 1.4 00:08:33.703 Maximum Queue Entries: 2048 00:08:33.703 Contiguous Queues Required: Yes 00:08:33.704 Arbitration Mechanisms Supported 00:08:33.704 Weighted Round Robin: Not Supported 00:08:33.704 Vendor Specific: Not Supported 00:08:33.704 Reset Timeout: 7500 ms 00:08:33.704 Doorbell Stride: 4 bytes 00:08:33.704 NVM Subsystem Reset: Not Supported 00:08:33.704 Command Sets Supported 00:08:33.704 NVM Command Set: Supported 00:08:33.704 Boot Partition: Not Supported 00:08:33.704 Memory Page Size Minimum: 4096 bytes 00:08:33.704 Memory Page Size Maximum: 65536 bytes 00:08:33.704 Persistent Memory Region: Not Supported 00:08:33.704 Optional Asynchronous Events Supported 00:08:33.704 Namespace Attribute Notices: Supported 00:08:33.704 Firmware Activation Notices: Not Supported 00:08:33.704 ANA Change Notices: Not Supported 00:08:33.704 PLE Aggregate Log Change Notices: Not Supported 00:08:33.704 LBA Status Info Alert Notices: Not Supported 00:08:33.704 EGE Aggregate Log Change Notices: Not Supported 00:08:33.704 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.704 Zone Descriptor Change Notices: Not Supported 00:08:33.704 Discovery Log Change Notices: Not Supported 00:08:33.704 Controller Attributes 00:08:33.704 128-bit Host Identifier: Not Supported 00:08:33.704 Non-Operational Permissive Mode: Not Supported 00:08:33.704 NVM Sets: Not Supported 00:08:33.704 Read Recovery Levels: Not Supported 00:08:33.704 Endurance Groups: Not Supported 00:08:33.704 Predictable Latency Mode: Not Supported 00:08:33.704 Traffic Based Keep ALive: Not Supported 00:08:33.704 Namespace Granularity: Not Supported 00:08:33.704 SQ Associations: Not Supported 00:08:33.704 UUID List: Not Supported 00:08:33.704 Multi-Domain Subsystem: Not Supported 00:08:33.704 Fixed Capacity Management: Not Supported 00:08:33.704 Variable Capacity Management: Not Supported 00:08:33.704 Delete Endurance Group: Not Supported 00:08:33.704 Delete NVM Set: Not Supported 00:08:33.704 Extended LBA Formats Supported: Supported 00:08:33.704 Flexible Data Placement Supported: Not Supported 00:08:33.704 00:08:33.704 Controller Memory Buffer Support 00:08:33.704 ================================ 00:08:33.704 Supported: No 00:08:33.704 00:08:33.704 Persistent Memory Region Support 00:08:33.704 ================================ 00:08:33.704 Supported: No 00:08:33.704 00:08:33.704 Admin Command Set Attributes 00:08:33.704 ============================ 00:08:33.704 Security Send/Receive: Not Supported 00:08:33.704 Format NVM: Supported 00:08:33.704 Firmware Activate/Download: Not Supported 00:08:33.704 Namespace Management: Supported 00:08:33.704 Device Self-Test: Not Supported 00:08:33.704 Directives: Supported 00:08:33.704 NVMe-MI: Not Supported 00:08:33.704 Virtualization Management: Not Supported 00:08:33.704 Doorbell Buffer Config: Supported 00:08:33.704 Get LBA Status Capability: Not Supported 00:08:33.704 Command & Feature Lockdown Capability: Not 
Supported 00:08:33.704 Abort Command Limit: 4 00:08:33.704 Async Event Request Limit: 4 00:08:33.704 Number of Firmware Slots: N/A 00:08:33.704 Firmware Slot 1 Read-Only: N/A 00:08:33.704 Firmware Activation Without Reset: N/A 00:08:33.704 Multiple Update Detection Support: N/A 00:08:33.704 Firmware Update Granularity: No Information Provided 00:08:33.704 Per-Namespace SMART Log: Yes 00:08:33.704 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.704 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:33.704 Command Effects Log Page: Supported 00:08:33.704 Get Log Page Extended Data: Supported 00:08:33.704 Telemetry Log Pages: Not Supported 00:08:33.704 Persistent Event Log Pages: Not Supported 00:08:33.704 Supported Log Pages Log Page: May Support 00:08:33.704 Commands Supported & Effects Log Page: Not Supported 00:08:33.704 Feature Identifiers & Effects Log Page:May Support 00:08:33.704 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.704 Data Area 4 for Telemetry Log: Not Supported 00:08:33.704 Error Log Page Entries Supported: 1 00:08:33.704 Keep Alive: Not Supported 00:08:33.704 00:08:33.704 NVM Command Set Attributes 00:08:33.704 ========================== 00:08:33.704 Submission Queue Entry Size 00:08:33.704 Max: 64 00:08:33.704 Min: 64 00:08:33.704 Completion Queue Entry Size 00:08:33.704 Max: 16 00:08:33.704 Min: 16 00:08:33.704 Number of Namespaces: 256 00:08:33.704 Compare Command: Supported 00:08:33.704 Write Uncorrectable Command: Not Supported 00:08:33.704 Dataset Management Command: Supported 00:08:33.704 Write Zeroes Command: Supported 00:08:33.704 Set Features Save Field: Supported 00:08:33.704 Reservations: Not Supported 00:08:33.704 Timestamp: Supported 00:08:33.704 Copy: Supported 00:08:33.704 Volatile Write Cache: Present 00:08:33.704 Atomic Write Unit (Normal): 1 00:08:33.704 Atomic Write Unit (PFail): 1 00:08:33.704 Atomic Compare & Write Unit: 1 00:08:33.704 Fused Compare & Write: Not Supported 00:08:33.704 Scatter-Gather List 00:08:33.704 SGL Command Set: Supported 00:08:33.704 SGL Keyed: Not Supported 00:08:33.704 SGL Bit Bucket Descriptor: Not Supported 00:08:33.704 SGL Metadata Pointer: Not Supported 00:08:33.704 Oversized SGL: Not Supported 00:08:33.704 SGL Metadata Address: Not Supported 00:08:33.704 SGL Offset: Not Supported 00:08:33.704 Transport SGL Data Block: Not Supported 00:08:33.704 Replay Protected Memory Block: Not Supported 00:08:33.704 00:08:33.704 Firmware Slot Information 00:08:33.704 ========================= 00:08:33.704 Active slot: 1 00:08:33.704 Slot 1 Firmware Revision: 1.0 00:08:33.704 00:08:33.704 00:08:33.704 Commands Supported and Effects 00:08:33.704 ============================== 00:08:33.704 Admin Commands 00:08:33.704 -------------- 00:08:33.704 Delete I/O Submission Queue (00h): Supported 00:08:33.704 Create I/O Submission Queue (01h): Supported 00:08:33.704 Get Log Page (02h): Supported 00:08:33.704 Delete I/O Completion Queue (04h): Supported 00:08:33.704 Create I/O Completion Queue (05h): Supported 00:08:33.704 Identify (06h): Supported 00:08:33.704 Abort (08h): Supported 00:08:33.704 Set Features (09h): Supported 00:08:33.704 Get Features (0Ah): Supported 00:08:33.704 Asynchronous Event Request (0Ch): Supported 00:08:33.704 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.704 Directive Send (19h): Supported 00:08:33.704 Directive Receive (1Ah): Supported 00:08:33.704 Virtualization Management (1Ch): Supported 00:08:33.704 Doorbell Buffer Config (7Ch): Supported 00:08:33.704 Format NVM (80h): 
Supported LBA-Change 00:08:33.704 I/O Commands 00:08:33.704 ------------ 00:08:33.704 Flush (00h): Supported LBA-Change 00:08:33.704 Write (01h): Supported LBA-Change 00:08:33.704 Read (02h): Supported 00:08:33.704 Compare (05h): Supported 00:08:33.704 Write Zeroes (08h): Supported LBA-Change 00:08:33.704 Dataset Management (09h): Supported LBA-Change 00:08:33.704 Unknown (0Ch): Supported 00:08:33.704 Unknown (12h): Supported 00:08:33.704 Copy (19h): Supported LBA-Change 00:08:33.704 Unknown (1Dh): Supported LBA-Change 00:08:33.704 00:08:33.704 Error Log 00:08:33.704 ========= 00:08:33.704 00:08:33.704 Arbitration 00:08:33.704 =========== 00:08:33.704 Arbitration Burst: no limit 00:08:33.704 00:08:33.704 Power Management 00:08:33.704 ================ 00:08:33.704 Number of Power States: 1 00:08:33.704 Current Power State: Power State #0 00:08:33.704 Power State #0: 00:08:33.704 Max Power: 25.00 W 00:08:33.704 Non-Operational State: Operational 00:08:33.704 Entry Latency: 16 microseconds 00:08:33.704 Exit Latency: 4 microseconds 00:08:33.704 Relative Read Throughput: 0 00:08:33.704 Relative Read Latency: 0 00:08:33.704 Relative Write Throughput: 0 00:08:33.704 Relative Write Latency: 0 00:08:33.704 Idle Power: Not Reported 00:08:33.704 Active Power: Not Reported 00:08:33.704 Non-Operational Permissive Mode: Not Supported 00:08:33.704 00:08:33.704 Health Information 00:08:33.704 ================== 00:08:33.704 Critical Warnings: 00:08:33.704 Available Spare Space: OK 00:08:33.704 Temperature: OK 00:08:33.704 Device Reliability: OK 00:08:33.704 Read Only: No 00:08:33.704 Volatile Memory Backup: OK 00:08:33.704 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.704 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.704 Available Spare: 0% 00:08:33.704 Available Spare Threshold: 0% 00:08:33.704 Life Percentage Used: 0% 00:08:33.704 Data Units Read: 2049 00:08:33.705 Data Units Written: 1837 00:08:33.705 Host Read Commands: 96779 00:08:33.705 Host Write Commands: 95048 00:08:33.705 Controller Busy Time: 0 minutes 00:08:33.705 Power Cycles: 0 00:08:33.705 Power On Hours: 0 hours 00:08:33.705 Unsafe Shutdowns: 0 00:08:33.705 Unrecoverable Media Errors: 0 00:08:33.705 Lifetime Error Log Entries: 0 00:08:33.705 Warning Temperature Time: 0 minutes 00:08:33.705 Critical Temperature Time: 0 minutes 00:08:33.705 00:08:33.705 Number of Queues 00:08:33.705 ================ 00:08:33.705 Number of I/O Submission Queues: 64 00:08:33.705 Number of I/O Completion Queues: 64 00:08:33.705 00:08:33.705 ZNS Specific Controller Data 00:08:33.705 ============================ 00:08:33.705 Zone Append Size Limit: 0 00:08:33.705 00:08:33.705 00:08:33.705 Active Namespaces 00:08:33.705 ================= 00:08:33.705 Namespace ID:1 00:08:33.705 Error Recovery Timeout: Unlimited 00:08:33.705 Command Set Identifier: NVM (00h) 00:08:33.705 Deallocate: Supported 00:08:33.705 Deallocated/Unwritten Error: Supported 00:08:33.705 Deallocated Read Value: All 0x00 00:08:33.705 Deallocate in Write Zeroes: Not Supported 00:08:33.705 Deallocated Guard Field: 0xFFFF 00:08:33.705 Flush: Supported 00:08:33.705 Reservation: Not Supported 00:08:33.705 Namespace Sharing Capabilities: Private 00:08:33.705 Size (in LBAs): 1048576 (4GiB) 00:08:33.705 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.705 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.705 Thin Provisioning: Not Supported 00:08:33.705 Per-NS Atomic Units: No 00:08:33.705 Maximum Single Source Range Length: 128 00:08:33.705 Maximum Copy Length: 128 00:08:33.705 Maximum Source 
Range Count: 128 00:08:33.705 NGUID/EUI64 Never Reused: No 00:08:33.705 Namespace Write Protected: No 00:08:33.705 Number of LBA Formats: 8 00:08:33.705 Current LBA Format: LBA Format #04 00:08:33.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.705 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.705 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.705 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.705 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.705 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.705 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.705 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.705 00:08:33.705 NVM Specific Namespace Data 00:08:33.705 =========================== 00:08:33.705 Logical Block Storage Tag Mask: 0 00:08:33.705 Protection Information Capabilities: 00:08:33.705 16b Guard Protection Information Storage Tag Support: No 00:08:33.705 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.705 Storage Tag Check Read Support: No 00:08:33.705 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Namespace ID:2 00:08:33.705 Error Recovery Timeout: Unlimited 00:08:33.705 Command Set Identifier: NVM (00h) 00:08:33.705 Deallocate: Supported 00:08:33.705 Deallocated/Unwritten Error: Supported 00:08:33.705 Deallocated Read Value: All 0x00 00:08:33.705 Deallocate in Write Zeroes: Not Supported 00:08:33.705 Deallocated Guard Field: 0xFFFF 00:08:33.705 Flush: Supported 00:08:33.705 Reservation: Not Supported 00:08:33.705 Namespace Sharing Capabilities: Private 00:08:33.705 Size (in LBAs): 1048576 (4GiB) 00:08:33.705 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.705 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.705 Thin Provisioning: Not Supported 00:08:33.705 Per-NS Atomic Units: No 00:08:33.705 Maximum Single Source Range Length: 128 00:08:33.705 Maximum Copy Length: 128 00:08:33.705 Maximum Source Range Count: 128 00:08:33.705 NGUID/EUI64 Never Reused: No 00:08:33.705 Namespace Write Protected: No 00:08:33.705 Number of LBA Formats: 8 00:08:33.705 Current LBA Format: LBA Format #04 00:08:33.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.705 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.705 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.705 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.705 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.705 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.705 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.705 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.705 00:08:33.705 NVM Specific Namespace Data 00:08:33.705 =========================== 00:08:33.705 Logical Block Storage 
Tag Mask: 0 00:08:33.705 Protection Information Capabilities: 00:08:33.705 16b Guard Protection Information Storage Tag Support: No 00:08:33.705 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.705 Storage Tag Check Read Support: No 00:08:33.705 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Namespace ID:3 00:08:33.705 Error Recovery Timeout: Unlimited 00:08:33.705 Command Set Identifier: NVM (00h) 00:08:33.705 Deallocate: Supported 00:08:33.705 Deallocated/Unwritten Error: Supported 00:08:33.705 Deallocated Read Value: All 0x00 00:08:33.705 Deallocate in Write Zeroes: Not Supported 00:08:33.705 Deallocated Guard Field: 0xFFFF 00:08:33.705 Flush: Supported 00:08:33.705 Reservation: Not Supported 00:08:33.705 Namespace Sharing Capabilities: Private 00:08:33.705 Size (in LBAs): 1048576 (4GiB) 00:08:33.705 Capacity (in LBAs): 1048576 (4GiB) 00:08:33.705 Utilization (in LBAs): 1048576 (4GiB) 00:08:33.705 Thin Provisioning: Not Supported 00:08:33.705 Per-NS Atomic Units: No 00:08:33.705 Maximum Single Source Range Length: 128 00:08:33.705 Maximum Copy Length: 128 00:08:33.705 Maximum Source Range Count: 128 00:08:33.705 NGUID/EUI64 Never Reused: No 00:08:33.705 Namespace Write Protected: No 00:08:33.705 Number of LBA Formats: 8 00:08:33.705 Current LBA Format: LBA Format #04 00:08:33.705 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.705 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.705 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.705 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.705 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.705 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.705 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.705 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.705 00:08:33.705 NVM Specific Namespace Data 00:08:33.705 =========================== 00:08:33.705 Logical Block Storage Tag Mask: 0 00:08:33.705 Protection Information Capabilities: 00:08:33.705 16b Guard Protection Information Storage Tag Support: No 00:08:33.705 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.705 Storage Tag Check Read Support: No 00:08:33.705 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA 
Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.705 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.968 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:33.968 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:33.968 ===================================================== 00:08:33.968 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:33.968 ===================================================== 00:08:33.968 Controller Capabilities/Features 00:08:33.968 ================================ 00:08:33.968 Vendor ID: 1b36 00:08:33.968 Subsystem Vendor ID: 1af4 00:08:33.968 Serial Number: 12340 00:08:33.968 Model Number: QEMU NVMe Ctrl 00:08:33.968 Firmware Version: 8.0.0 00:08:33.968 Recommended Arb Burst: 6 00:08:33.968 IEEE OUI Identifier: 00 54 52 00:08:33.968 Multi-path I/O 00:08:33.969 May have multiple subsystem ports: No 00:08:33.969 May have multiple controllers: No 00:08:33.969 Associated with SR-IOV VF: No 00:08:33.969 Max Data Transfer Size: 524288 00:08:33.969 Max Number of Namespaces: 256 00:08:33.969 Max Number of I/O Queues: 64 00:08:33.969 NVMe Specification Version (VS): 1.4 00:08:33.969 NVMe Specification Version (Identify): 1.4 00:08:33.969 Maximum Queue Entries: 2048 00:08:33.969 Contiguous Queues Required: Yes 00:08:33.969 Arbitration Mechanisms Supported 00:08:33.969 Weighted Round Robin: Not Supported 00:08:33.969 Vendor Specific: Not Supported 00:08:33.969 Reset Timeout: 7500 ms 00:08:33.969 Doorbell Stride: 4 bytes 00:08:33.969 NVM Subsystem Reset: Not Supported 00:08:33.969 Command Sets Supported 00:08:33.969 NVM Command Set: Supported 00:08:33.969 Boot Partition: Not Supported 00:08:33.969 Memory Page Size Minimum: 4096 bytes 00:08:33.969 Memory Page Size Maximum: 65536 bytes 00:08:33.969 Persistent Memory Region: Not Supported 00:08:33.969 Optional Asynchronous Events Supported 00:08:33.969 Namespace Attribute Notices: Supported 00:08:33.969 Firmware Activation Notices: Not Supported 00:08:33.969 ANA Change Notices: Not Supported 00:08:33.969 PLE Aggregate Log Change Notices: Not Supported 00:08:33.969 LBA Status Info Alert Notices: Not Supported 00:08:33.969 EGE Aggregate Log Change Notices: Not Supported 00:08:33.969 Normal NVM Subsystem Shutdown event: Not Supported 00:08:33.969 Zone Descriptor Change Notices: Not Supported 00:08:33.969 Discovery Log Change Notices: Not Supported 00:08:33.969 Controller Attributes 00:08:33.969 128-bit Host Identifier: Not Supported 00:08:33.969 Non-Operational Permissive Mode: Not Supported 00:08:33.969 NVM Sets: Not Supported 00:08:33.969 Read Recovery Levels: Not Supported 00:08:33.969 Endurance Groups: Not Supported 00:08:33.969 Predictable Latency Mode: Not Supported 00:08:33.969 Traffic Based Keep ALive: Not Supported 00:08:33.969 Namespace Granularity: Not Supported 00:08:33.969 SQ Associations: Not Supported 00:08:33.969 UUID List: Not Supported 00:08:33.969 Multi-Domain Subsystem: Not Supported 00:08:33.969 Fixed Capacity Management: Not Supported 00:08:33.969 Variable Capacity Management: Not Supported 00:08:33.969 Delete Endurance Group: Not Supported 00:08:33.969 Delete NVM Set: Not Supported 00:08:33.969 Extended LBA Formats Supported: Supported 00:08:33.969 Flexible Data Placement Supported: Not 
Supported 00:08:33.969 00:08:33.969 Controller Memory Buffer Support 00:08:33.969 ================================ 00:08:33.969 Supported: No 00:08:33.969 00:08:33.969 Persistent Memory Region Support 00:08:33.969 ================================ 00:08:33.969 Supported: No 00:08:33.969 00:08:33.969 Admin Command Set Attributes 00:08:33.969 ============================ 00:08:33.969 Security Send/Receive: Not Supported 00:08:33.969 Format NVM: Supported 00:08:33.969 Firmware Activate/Download: Not Supported 00:08:33.969 Namespace Management: Supported 00:08:33.969 Device Self-Test: Not Supported 00:08:33.969 Directives: Supported 00:08:33.969 NVMe-MI: Not Supported 00:08:33.969 Virtualization Management: Not Supported 00:08:33.969 Doorbell Buffer Config: Supported 00:08:33.969 Get LBA Status Capability: Not Supported 00:08:33.969 Command & Feature Lockdown Capability: Not Supported 00:08:33.969 Abort Command Limit: 4 00:08:33.969 Async Event Request Limit: 4 00:08:33.969 Number of Firmware Slots: N/A 00:08:33.969 Firmware Slot 1 Read-Only: N/A 00:08:33.969 Firmware Activation Without Reset: N/A 00:08:33.969 Multiple Update Detection Support: N/A 00:08:33.969 Firmware Update Granularity: No Information Provided 00:08:33.969 Per-Namespace SMART Log: Yes 00:08:33.969 Asymmetric Namespace Access Log Page: Not Supported 00:08:33.969 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:33.969 Command Effects Log Page: Supported 00:08:33.969 Get Log Page Extended Data: Supported 00:08:33.969 Telemetry Log Pages: Not Supported 00:08:33.969 Persistent Event Log Pages: Not Supported 00:08:33.969 Supported Log Pages Log Page: May Support 00:08:33.969 Commands Supported & Effects Log Page: Not Supported 00:08:33.969 Feature Identifiers & Effects Log Page:May Support 00:08:33.969 NVMe-MI Commands & Effects Log Page: May Support 00:08:33.969 Data Area 4 for Telemetry Log: Not Supported 00:08:33.969 Error Log Page Entries Supported: 1 00:08:33.969 Keep Alive: Not Supported 00:08:33.969 00:08:33.969 NVM Command Set Attributes 00:08:33.969 ========================== 00:08:33.969 Submission Queue Entry Size 00:08:33.969 Max: 64 00:08:33.969 Min: 64 00:08:33.969 Completion Queue Entry Size 00:08:33.969 Max: 16 00:08:33.969 Min: 16 00:08:33.969 Number of Namespaces: 256 00:08:33.969 Compare Command: Supported 00:08:33.969 Write Uncorrectable Command: Not Supported 00:08:33.969 Dataset Management Command: Supported 00:08:33.969 Write Zeroes Command: Supported 00:08:33.969 Set Features Save Field: Supported 00:08:33.969 Reservations: Not Supported 00:08:33.969 Timestamp: Supported 00:08:33.969 Copy: Supported 00:08:33.969 Volatile Write Cache: Present 00:08:33.969 Atomic Write Unit (Normal): 1 00:08:33.969 Atomic Write Unit (PFail): 1 00:08:33.969 Atomic Compare & Write Unit: 1 00:08:33.969 Fused Compare & Write: Not Supported 00:08:33.969 Scatter-Gather List 00:08:33.969 SGL Command Set: Supported 00:08:33.969 SGL Keyed: Not Supported 00:08:33.969 SGL Bit Bucket Descriptor: Not Supported 00:08:33.969 SGL Metadata Pointer: Not Supported 00:08:33.969 Oversized SGL: Not Supported 00:08:33.969 SGL Metadata Address: Not Supported 00:08:33.969 SGL Offset: Not Supported 00:08:33.969 Transport SGL Data Block: Not Supported 00:08:33.969 Replay Protected Memory Block: Not Supported 00:08:33.969 00:08:33.969 Firmware Slot Information 00:08:33.969 ========================= 00:08:33.969 Active slot: 1 00:08:33.969 Slot 1 Firmware Revision: 1.0 00:08:33.969 00:08:33.969 00:08:33.969 Commands Supported and Effects 00:08:33.969 
============================== 00:08:33.969 Admin Commands 00:08:33.969 -------------- 00:08:33.969 Delete I/O Submission Queue (00h): Supported 00:08:33.969 Create I/O Submission Queue (01h): Supported 00:08:33.969 Get Log Page (02h): Supported 00:08:33.969 Delete I/O Completion Queue (04h): Supported 00:08:33.969 Create I/O Completion Queue (05h): Supported 00:08:33.969 Identify (06h): Supported 00:08:33.969 Abort (08h): Supported 00:08:33.969 Set Features (09h): Supported 00:08:33.969 Get Features (0Ah): Supported 00:08:33.969 Asynchronous Event Request (0Ch): Supported 00:08:33.969 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:33.969 Directive Send (19h): Supported 00:08:33.969 Directive Receive (1Ah): Supported 00:08:33.969 Virtualization Management (1Ch): Supported 00:08:33.969 Doorbell Buffer Config (7Ch): Supported 00:08:33.969 Format NVM (80h): Supported LBA-Change 00:08:33.969 I/O Commands 00:08:33.969 ------------ 00:08:33.969 Flush (00h): Supported LBA-Change 00:08:33.969 Write (01h): Supported LBA-Change 00:08:33.969 Read (02h): Supported 00:08:33.969 Compare (05h): Supported 00:08:33.969 Write Zeroes (08h): Supported LBA-Change 00:08:33.969 Dataset Management (09h): Supported LBA-Change 00:08:33.969 Unknown (0Ch): Supported 00:08:33.969 Unknown (12h): Supported 00:08:33.969 Copy (19h): Supported LBA-Change 00:08:33.969 Unknown (1Dh): Supported LBA-Change 00:08:33.969 00:08:33.969 Error Log 00:08:33.969 ========= 00:08:33.969 00:08:33.969 Arbitration 00:08:33.969 =========== 00:08:33.969 Arbitration Burst: no limit 00:08:33.969 00:08:33.969 Power Management 00:08:33.969 ================ 00:08:33.969 Number of Power States: 1 00:08:33.969 Current Power State: Power State #0 00:08:33.969 Power State #0: 00:08:33.969 Max Power: 25.00 W 00:08:33.969 Non-Operational State: Operational 00:08:33.969 Entry Latency: 16 microseconds 00:08:33.969 Exit Latency: 4 microseconds 00:08:33.969 Relative Read Throughput: 0 00:08:33.969 Relative Read Latency: 0 00:08:33.969 Relative Write Throughput: 0 00:08:33.969 Relative Write Latency: 0 00:08:33.969 Idle Power: Not Reported 00:08:33.969 Active Power: Not Reported 00:08:33.969 Non-Operational Permissive Mode: Not Supported 00:08:33.969 00:08:33.969 Health Information 00:08:33.969 ================== 00:08:33.969 Critical Warnings: 00:08:33.969 Available Spare Space: OK 00:08:33.969 Temperature: OK 00:08:33.969 Device Reliability: OK 00:08:33.969 Read Only: No 00:08:33.969 Volatile Memory Backup: OK 00:08:33.969 Current Temperature: 323 Kelvin (50 Celsius) 00:08:33.969 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:33.969 Available Spare: 0% 00:08:33.969 Available Spare Threshold: 0% 00:08:33.969 Life Percentage Used: 0% 00:08:33.970 Data Units Read: 638 00:08:33.970 Data Units Written: 566 00:08:33.970 Host Read Commands: 31584 00:08:33.970 Host Write Commands: 31370 00:08:33.970 Controller Busy Time: 0 minutes 00:08:33.970 Power Cycles: 0 00:08:33.970 Power On Hours: 0 hours 00:08:33.970 Unsafe Shutdowns: 0 00:08:33.970 Unrecoverable Media Errors: 0 00:08:33.970 Lifetime Error Log Entries: 0 00:08:33.970 Warning Temperature Time: 0 minutes 00:08:33.970 Critical Temperature Time: 0 minutes 00:08:33.970 00:08:33.970 Number of Queues 00:08:33.970 ================ 00:08:33.970 Number of I/O Submission Queues: 64 00:08:33.970 Number of I/O Completion Queues: 64 00:08:33.970 00:08:33.970 ZNS Specific Controller Data 00:08:33.970 ============================ 00:08:33.970 Zone Append Size Limit: 0 00:08:33.970 00:08:33.970 
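[editor's note] The Health Information blocks above report temperatures in Kelvin with the Celsius value in parentheses; the conversion is Celsius = Kelvin - 273, so 323 K prints as 50 C and the 343 K threshold as 70 C. A minimal bash sketch of re-deriving that figure from the tool's output, reusing the exact spdk_nvme_identify invocation shown in this log (the awk field positions assume the "Current Temperature: 323 Kelvin (50 Celsius)" line format seen here):

    IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    # Print the controller's current temperature in Kelvin and Celsius.
    "$IDENTIFY" -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 \
        | awk '/Current Temperature:/ { printf "%d K = %d C\n", $3, $3 - 273 }'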
00:08:33.970 Active Namespaces 00:08:33.970 ================= 00:08:33.970 Namespace ID:1 00:08:33.970 Error Recovery Timeout: Unlimited 00:08:33.970 Command Set Identifier: NVM (00h) 00:08:33.970 Deallocate: Supported 00:08:33.970 Deallocated/Unwritten Error: Supported 00:08:33.970 Deallocated Read Value: All 0x00 00:08:33.970 Deallocate in Write Zeroes: Not Supported 00:08:33.970 Deallocated Guard Field: 0xFFFF 00:08:33.970 Flush: Supported 00:08:33.970 Reservation: Not Supported 00:08:33.970 Metadata Transferred as: Separate Metadata Buffer 00:08:33.970 Namespace Sharing Capabilities: Private 00:08:33.970 Size (in LBAs): 1548666 (5GiB) 00:08:33.970 Capacity (in LBAs): 1548666 (5GiB) 00:08:33.970 Utilization (in LBAs): 1548666 (5GiB) 00:08:33.970 Thin Provisioning: Not Supported 00:08:33.970 Per-NS Atomic Units: No 00:08:33.970 Maximum Single Source Range Length: 128 00:08:33.970 Maximum Copy Length: 128 00:08:33.970 Maximum Source Range Count: 128 00:08:33.970 NGUID/EUI64 Never Reused: No 00:08:33.970 Namespace Write Protected: No 00:08:33.970 Number of LBA Formats: 8 00:08:33.970 Current LBA Format: LBA Format #07 00:08:33.970 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:33.970 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:33.970 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:33.970 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:33.970 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:33.970 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:33.970 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:33.970 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:33.970 00:08:33.970 NVM Specific Namespace Data 00:08:33.970 =========================== 00:08:33.970 Logical Block Storage Tag Mask: 0 00:08:33.970 Protection Information Capabilities: 00:08:33.970 16b Guard Protection Information Storage Tag Support: No 00:08:33.970 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:33.970 Storage Tag Check Read Support: No 00:08:33.970 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:33.970 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:33.970 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:34.231 ===================================================== 00:08:34.231 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:34.231 ===================================================== 00:08:34.231 Controller Capabilities/Features 00:08:34.231 ================================ 00:08:34.231 Vendor ID: 1b36 00:08:34.231 Subsystem Vendor ID: 1af4 00:08:34.231 Serial Number: 12341 00:08:34.231 Model Number: QEMU NVMe Ctrl 
00:08:34.231 Firmware Version: 8.0.0 00:08:34.231 Recommended Arb Burst: 6 00:08:34.231 IEEE OUI Identifier: 00 54 52 00:08:34.231 Multi-path I/O 00:08:34.231 May have multiple subsystem ports: No 00:08:34.231 May have multiple controllers: No 00:08:34.231 Associated with SR-IOV VF: No 00:08:34.231 Max Data Transfer Size: 524288 00:08:34.231 Max Number of Namespaces: 256 00:08:34.231 Max Number of I/O Queues: 64 00:08:34.231 NVMe Specification Version (VS): 1.4 00:08:34.231 NVMe Specification Version (Identify): 1.4 00:08:34.231 Maximum Queue Entries: 2048 00:08:34.231 Contiguous Queues Required: Yes 00:08:34.231 Arbitration Mechanisms Supported 00:08:34.231 Weighted Round Robin: Not Supported 00:08:34.231 Vendor Specific: Not Supported 00:08:34.231 Reset Timeout: 7500 ms 00:08:34.231 Doorbell Stride: 4 bytes 00:08:34.231 NVM Subsystem Reset: Not Supported 00:08:34.231 Command Sets Supported 00:08:34.231 NVM Command Set: Supported 00:08:34.231 Boot Partition: Not Supported 00:08:34.231 Memory Page Size Minimum: 4096 bytes 00:08:34.231 Memory Page Size Maximum: 65536 bytes 00:08:34.231 Persistent Memory Region: Not Supported 00:08:34.231 Optional Asynchronous Events Supported 00:08:34.231 Namespace Attribute Notices: Supported 00:08:34.231 Firmware Activation Notices: Not Supported 00:08:34.231 ANA Change Notices: Not Supported 00:08:34.231 PLE Aggregate Log Change Notices: Not Supported 00:08:34.231 LBA Status Info Alert Notices: Not Supported 00:08:34.231 EGE Aggregate Log Change Notices: Not Supported 00:08:34.231 Normal NVM Subsystem Shutdown event: Not Supported 00:08:34.231 Zone Descriptor Change Notices: Not Supported 00:08:34.231 Discovery Log Change Notices: Not Supported 00:08:34.231 Controller Attributes 00:08:34.231 128-bit Host Identifier: Not Supported 00:08:34.231 Non-Operational Permissive Mode: Not Supported 00:08:34.231 NVM Sets: Not Supported 00:08:34.231 Read Recovery Levels: Not Supported 00:08:34.231 Endurance Groups: Not Supported 00:08:34.231 Predictable Latency Mode: Not Supported 00:08:34.231 Traffic Based Keep Alive: Not Supported 00:08:34.231 Namespace Granularity: Not Supported 00:08:34.231 SQ Associations: Not Supported 00:08:34.231 UUID List: Not Supported 00:08:34.231 Multi-Domain Subsystem: Not Supported 00:08:34.231 Fixed Capacity Management: Not Supported 00:08:34.231 Variable Capacity Management: Not Supported 00:08:34.231 Delete Endurance Group: Not Supported 00:08:34.231 Delete NVM Set: Not Supported 00:08:34.231 Extended LBA Formats Supported: Supported 00:08:34.231 Flexible Data Placement Supported: Not Supported 00:08:34.231 00:08:34.231 Controller Memory Buffer Support 00:08:34.231 ================================ 00:08:34.231 Supported: No 00:08:34.231 00:08:34.231 Persistent Memory Region Support 00:08:34.231 ================================ 00:08:34.231 Supported: No 00:08:34.231 00:08:34.231 Admin Command Set Attributes 00:08:34.231 ============================ 00:08:34.231 Security Send/Receive: Not Supported 00:08:34.231 Format NVM: Supported 00:08:34.231 Firmware Activate/Download: Not Supported 00:08:34.232 Namespace Management: Supported 00:08:34.232 Device Self-Test: Not Supported 00:08:34.232 Directives: Supported 00:08:34.232 NVMe-MI: Not Supported 00:08:34.232 Virtualization Management: Not Supported 00:08:34.232 Doorbell Buffer Config: Supported 00:08:34.232 Get LBA Status Capability: Not Supported 00:08:34.232 Command & Feature Lockdown Capability: Not Supported 00:08:34.232 Abort Command Limit: 4 00:08:34.232 Async Event Request
Limit: 4 00:08:34.232 Number of Firmware Slots: N/A 00:08:34.232 Firmware Slot 1 Read-Only: N/A 00:08:34.232 Firmware Activation Without Reset: N/A 00:08:34.232 Multiple Update Detection Support: N/A 00:08:34.232 Firmware Update Granularity: No Information Provided 00:08:34.232 Per-Namespace SMART Log: Yes 00:08:34.232 Asymmetric Namespace Access Log Page: Not Supported 00:08:34.232 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:34.232 Command Effects Log Page: Supported 00:08:34.232 Get Log Page Extended Data: Supported 00:08:34.232 Telemetry Log Pages: Not Supported 00:08:34.232 Persistent Event Log Pages: Not Supported 00:08:34.232 Supported Log Pages Log Page: May Support 00:08:34.232 Commands Supported & Effects Log Page: Not Supported 00:08:34.232 Feature Identifiers & Effects Log Page: May Support 00:08:34.232 NVMe-MI Commands & Effects Log Page: May Support 00:08:34.232 Data Area 4 for Telemetry Log: Not Supported 00:08:34.232 Error Log Page Entries Supported: 1 00:08:34.232 Keep Alive: Not Supported 00:08:34.232 00:08:34.232 NVM Command Set Attributes 00:08:34.232 ========================== 00:08:34.232 Submission Queue Entry Size 00:08:34.232 Max: 64 00:08:34.232 Min: 64 00:08:34.232 Completion Queue Entry Size 00:08:34.232 Max: 16 00:08:34.232 Min: 16 00:08:34.232 Number of Namespaces: 256 00:08:34.232 Compare Command: Supported 00:08:34.232 Write Uncorrectable Command: Not Supported 00:08:34.232 Dataset Management Command: Supported 00:08:34.232 Write Zeroes Command: Supported 00:08:34.232 Set Features Save Field: Supported 00:08:34.232 Reservations: Not Supported 00:08:34.232 Timestamp: Supported 00:08:34.232 Copy: Supported 00:08:34.232 Volatile Write Cache: Present 00:08:34.232 Atomic Write Unit (Normal): 1 00:08:34.232 Atomic Write Unit (PFail): 1 00:08:34.232 Atomic Compare & Write Unit: 1 00:08:34.232 Fused Compare & Write: Not Supported 00:08:34.232 Scatter-Gather List 00:08:34.232 SGL Command Set: Supported 00:08:34.232 SGL Keyed: Not Supported 00:08:34.232 SGL Bit Bucket Descriptor: Not Supported 00:08:34.232 SGL Metadata Pointer: Not Supported 00:08:34.232 Oversized SGL: Not Supported 00:08:34.232 SGL Metadata Address: Not Supported 00:08:34.232 SGL Offset: Not Supported 00:08:34.232 Transport SGL Data Block: Not Supported 00:08:34.232 Replay Protected Memory Block: Not Supported 00:08:34.232 00:08:34.232 Firmware Slot Information 00:08:34.232 ========================= 00:08:34.232 Active slot: 1 00:08:34.232 Slot 1 Firmware Revision: 1.0 00:08:34.232 00:08:34.232 00:08:34.232 Commands Supported and Effects 00:08:34.232 ============================== 00:08:34.232 Admin Commands 00:08:34.232 -------------- 00:08:34.232 Delete I/O Submission Queue (00h): Supported 00:08:34.232 Create I/O Submission Queue (01h): Supported 00:08:34.232 Get Log Page (02h): Supported 00:08:34.232 Delete I/O Completion Queue (04h): Supported 00:08:34.232 Create I/O Completion Queue (05h): Supported 00:08:34.232 Identify (06h): Supported 00:08:34.232 Abort (08h): Supported 00:08:34.232 Set Features (09h): Supported 00:08:34.232 Get Features (0Ah): Supported 00:08:34.232 Asynchronous Event Request (0Ch): Supported 00:08:34.232 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:34.232 Directive Send (19h): Supported 00:08:34.232 Directive Receive (1Ah): Supported 00:08:34.232 Virtualization Management (1Ch): Supported 00:08:34.232 Doorbell Buffer Config (7Ch): Supported 00:08:34.232 Format NVM (80h): Supported LBA-Change 00:08:34.232 I/O Commands 00:08:34.232 ------------
00:08:34.232 Flush (00h): Supported LBA-Change 00:08:34.232 Write (01h): Supported LBA-Change 00:08:34.232 Read (02h): Supported 00:08:34.232 Compare (05h): Supported 00:08:34.232 Write Zeroes (08h): Supported LBA-Change 00:08:34.232 Dataset Management (09h): Supported LBA-Change 00:08:34.232 Unknown (0Ch): Supported 00:08:34.232 Unknown (12h): Supported 00:08:34.232 Copy (19h): Supported LBA-Change 00:08:34.232 Unknown (1Dh): Supported LBA-Change 00:08:34.232 00:08:34.232 Error Log 00:08:34.232 ========= 00:08:34.232 00:08:34.232 Arbitration 00:08:34.232 =========== 00:08:34.232 Arbitration Burst: no limit 00:08:34.232 00:08:34.232 Power Management 00:08:34.232 ================ 00:08:34.232 Number of Power States: 1 00:08:34.232 Current Power State: Power State #0 00:08:34.232 Power State #0: 00:08:34.232 Max Power: 25.00 W 00:08:34.232 Non-Operational State: Operational 00:08:34.232 Entry Latency: 16 microseconds 00:08:34.232 Exit Latency: 4 microseconds 00:08:34.232 Relative Read Throughput: 0 00:08:34.232 Relative Read Latency: 0 00:08:34.232 Relative Write Throughput: 0 00:08:34.232 Relative Write Latency: 0 00:08:34.232 Idle Power: Not Reported 00:08:34.232 Active Power: Not Reported 00:08:34.232 Non-Operational Permissive Mode: Not Supported 00:08:34.232 00:08:34.232 Health Information 00:08:34.232 ================== 00:08:34.232 Critical Warnings: 00:08:34.232 Available Spare Space: OK 00:08:34.232 Temperature: OK 00:08:34.232 Device Reliability: OK 00:08:34.232 Read Only: No 00:08:34.232 Volatile Memory Backup: OK 00:08:34.232 Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.232 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:34.232 Available Spare: 0% 00:08:34.232 Available Spare Threshold: 0% 00:08:34.232 Life Percentage Used: 0% 00:08:34.232 Data Units Read: 985 00:08:34.232 Data Units Written: 858 00:08:34.232 Host Read Commands: 46995 00:08:34.232 Host Write Commands: 45889 00:08:34.232 Controller Busy Time: 0 minutes 00:08:34.232 Power Cycles: 0 00:08:34.232 Power On Hours: 0 hours 00:08:34.232 Unsafe Shutdowns: 0 00:08:34.232 Unrecoverable Media Errors: 0 00:08:34.232 Lifetime Error Log Entries: 0 00:08:34.232 Warning Temperature Time: 0 minutes 00:08:34.232 Critical Temperature Time: 0 minutes 00:08:34.232 00:08:34.232 Number of Queues 00:08:34.232 ================ 00:08:34.232 Number of I/O Submission Queues: 64 00:08:34.232 Number of I/O Completion Queues: 64 00:08:34.232 00:08:34.232 ZNS Specific Controller Data 00:08:34.232 ============================ 00:08:34.232 Zone Append Size Limit: 0 00:08:34.232 00:08:34.232 00:08:34.232 Active Namespaces 00:08:34.232 ================= 00:08:34.232 Namespace ID:1 00:08:34.232 Error Recovery Timeout: Unlimited 00:08:34.232 Command Set Identifier: NVM (00h) 00:08:34.232 Deallocate: Supported 00:08:34.232 Deallocated/Unwritten Error: Supported 00:08:34.232 Deallocated Read Value: All 0x00 00:08:34.232 Deallocate in Write Zeroes: Not Supported 00:08:34.232 Deallocated Guard Field: 0xFFFF 00:08:34.232 Flush: Supported 00:08:34.232 Reservation: Not Supported 00:08:34.232 Namespace Sharing Capabilities: Private 00:08:34.232 Size (in LBAs): 1310720 (5GiB) 00:08:34.232 Capacity (in LBAs): 1310720 (5GiB) 00:08:34.232 Utilization (in LBAs): 1310720 (5GiB) 00:08:34.232 Thin Provisioning: Not Supported 00:08:34.232 Per-NS Atomic Units: No 00:08:34.232 Maximum Single Source Range Length: 128 00:08:34.232 Maximum Copy Length: 128 00:08:34.232 Maximum Source Range Count: 128 00:08:34.232 NGUID/EUI64 Never Reused: No 00:08:34.232 
Namespace Write Protected: No 00:08:34.232 Number of LBA Formats: 8 00:08:34.232 Current LBA Format: LBA Format #04 00:08:34.232 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.232 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.232 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.232 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.232 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.232 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.232 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.232 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.232 00:08:34.232 NVM Specific Namespace Data 00:08:34.232 =========================== 00:08:34.232 Logical Block Storage Tag Mask: 0 00:08:34.232 Protection Information Capabilities: 00:08:34.232 16b Guard Protection Information Storage Tag Support: No 00:08:34.232 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.232 Storage Tag Check Read Support: No 00:08:34.232 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.232 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.233 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.233 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.233 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.233 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.233 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.233 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.233 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:34.233 21:50:18 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:34.497 ===================================================== 00:08:34.497 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:34.497 ===================================================== 00:08:34.497 Controller Capabilities/Features 00:08:34.497 ================================ 00:08:34.497 Vendor ID: 1b36 00:08:34.497 Subsystem Vendor ID: 1af4 00:08:34.497 Serial Number: 12342 00:08:34.497 Model Number: QEMU NVMe Ctrl 00:08:34.497 Firmware Version: 8.0.0 00:08:34.497 Recommended Arb Burst: 6 00:08:34.497 IEEE OUI Identifier: 00 54 52 00:08:34.497 Multi-path I/O 00:08:34.497 May have multiple subsystem ports: No 00:08:34.497 May have multiple controllers: No 00:08:34.497 Associated with SR-IOV VF: No 00:08:34.497 Max Data Transfer Size: 524288 00:08:34.497 Max Number of Namespaces: 256 00:08:34.497 Max Number of I/O Queues: 64 00:08:34.497 NVMe Specification Version (VS): 1.4 00:08:34.497 NVMe Specification Version (Identify): 1.4 00:08:34.497 Maximum Queue Entries: 2048 00:08:34.497 Contiguous Queues Required: Yes 00:08:34.497 Arbitration Mechanisms Supported 00:08:34.497 Weighted Round Robin: Not Supported 00:08:34.497 Vendor Specific: Not Supported 00:08:34.497 Reset Timeout: 7500 ms 00:08:34.497 Doorbell Stride: 4 bytes 00:08:34.497 NVM Subsystem Reset: Not Supported 00:08:34.497 Command Sets Supported 00:08:34.497 NVM Command Set: Supported 00:08:34.497 Boot Partition: Not Supported 00:08:34.497 Memory Page Size Minimum: 
4096 bytes 00:08:34.497 Memory Page Size Maximum: 65536 bytes 00:08:34.497 Persistent Memory Region: Not Supported 00:08:34.497 Optional Asynchronous Events Supported 00:08:34.497 Namespace Attribute Notices: Supported 00:08:34.497 Firmware Activation Notices: Not Supported 00:08:34.497 ANA Change Notices: Not Supported 00:08:34.497 PLE Aggregate Log Change Notices: Not Supported 00:08:34.497 LBA Status Info Alert Notices: Not Supported 00:08:34.497 EGE Aggregate Log Change Notices: Not Supported 00:08:34.497 Normal NVM Subsystem Shutdown event: Not Supported 00:08:34.497 Zone Descriptor Change Notices: Not Supported 00:08:34.497 Discovery Log Change Notices: Not Supported 00:08:34.497 Controller Attributes 00:08:34.497 128-bit Host Identifier: Not Supported 00:08:34.497 Non-Operational Permissive Mode: Not Supported 00:08:34.497 NVM Sets: Not Supported 00:08:34.497 Read Recovery Levels: Not Supported 00:08:34.497 Endurance Groups: Not Supported 00:08:34.497 Predictable Latency Mode: Not Supported 00:08:34.497 Traffic Based Keep Alive: Not Supported 00:08:34.497 Namespace Granularity: Not Supported 00:08:34.497 SQ Associations: Not Supported 00:08:34.497 UUID List: Not Supported 00:08:34.497 Multi-Domain Subsystem: Not Supported 00:08:34.497 Fixed Capacity Management: Not Supported 00:08:34.497 Variable Capacity Management: Not Supported 00:08:34.497 Delete Endurance Group: Not Supported 00:08:34.497 Delete NVM Set: Not Supported 00:08:34.497 Extended LBA Formats Supported: Supported 00:08:34.497 Flexible Data Placement Supported: Not Supported 00:08:34.497 00:08:34.497 Controller Memory Buffer Support 00:08:34.497 ================================ 00:08:34.497 Supported: No 00:08:34.497 00:08:34.497 Persistent Memory Region Support 00:08:34.497 ================================ 00:08:34.497 Supported: No 00:08:34.497 00:08:34.497 Admin Command Set Attributes 00:08:34.497 ============================ 00:08:34.497 Security Send/Receive: Not Supported 00:08:34.497 Format NVM: Supported 00:08:34.497 Firmware Activate/Download: Not Supported 00:08:34.497 Namespace Management: Supported 00:08:34.497 Device Self-Test: Not Supported 00:08:34.497 Directives: Supported 00:08:34.497 NVMe-MI: Not Supported 00:08:34.497 Virtualization Management: Not Supported 00:08:34.497 Doorbell Buffer Config: Supported 00:08:34.497 Get LBA Status Capability: Not Supported 00:08:34.497 Command & Feature Lockdown Capability: Not Supported 00:08:34.497 Abort Command Limit: 4 00:08:34.497 Async Event Request Limit: 4 00:08:34.497 Number of Firmware Slots: N/A 00:08:34.497 Firmware Slot 1 Read-Only: N/A 00:08:34.497 Firmware Activation Without Reset: N/A 00:08:34.497 Multiple Update Detection Support: N/A 00:08:34.497 Firmware Update Granularity: No Information Provided 00:08:34.497 Per-Namespace SMART Log: Yes 00:08:34.497 Asymmetric Namespace Access Log Page: Not Supported 00:08:34.497 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:34.497 Command Effects Log Page: Supported 00:08:34.497 Get Log Page Extended Data: Supported 00:08:34.497 Telemetry Log Pages: Not Supported 00:08:34.497 Persistent Event Log Pages: Not Supported 00:08:34.497 Supported Log Pages Log Page: May Support 00:08:34.497 Commands Supported & Effects Log Page: Not Supported 00:08:34.497 Feature Identifiers & Effects Log Page: May Support 00:08:34.497 NVMe-MI Commands & Effects Log Page: May Support 00:08:34.497 Data Area 4 for Telemetry Log: Not Supported 00:08:34.497 Error Log Page Entries Supported: 1 00:08:34.497 Keep Alive: Not Supported
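The Admin Command Set Attributes block above is where each controller advertises its optional admin-level features; on these QEMU controllers that is Format NVM, Namespace Management, Directives, and Doorbell Buffer Config. A quick sketch for comparing those flags across the four test controllers, assuming the same identify binary and the BDFs that appear elsewhere in this log:

  IDENTIFY=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
  for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    echo "== $bdf =="
    sudo "$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 \
      | grep -E 'Format NVM:|Namespace Management:|Directives:|Doorbell Buffer Config:'
  done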
00:08:34.497 00:08:34.497 NVM Command Set Attributes 00:08:34.497 ========================== 00:08:34.497 Submission Queue Entry Size 00:08:34.497 Max: 64 00:08:34.497 Min: 64 00:08:34.497 Completion Queue Entry Size 00:08:34.497 Max: 16 00:08:34.497 Min: 16 00:08:34.497 Number of Namespaces: 256 00:08:34.497 Compare Command: Supported 00:08:34.497 Write Uncorrectable Command: Not Supported 00:08:34.497 Dataset Management Command: Supported 00:08:34.497 Write Zeroes Command: Supported 00:08:34.497 Set Features Save Field: Supported 00:08:34.497 Reservations: Not Supported 00:08:34.497 Timestamp: Supported 00:08:34.497 Copy: Supported 00:08:34.497 Volatile Write Cache: Present 00:08:34.497 Atomic Write Unit (Normal): 1 00:08:34.497 Atomic Write Unit (PFail): 1 00:08:34.497 Atomic Compare & Write Unit: 1 00:08:34.497 Fused Compare & Write: Not Supported 00:08:34.497 Scatter-Gather List 00:08:34.497 SGL Command Set: Supported 00:08:34.497 SGL Keyed: Not Supported 00:08:34.497 SGL Bit Bucket Descriptor: Not Supported 00:08:34.497 SGL Metadata Pointer: Not Supported 00:08:34.497 Oversized SGL: Not Supported 00:08:34.497 SGL Metadata Address: Not Supported 00:08:34.497 SGL Offset: Not Supported 00:08:34.498 Transport SGL Data Block: Not Supported 00:08:34.498 Replay Protected Memory Block: Not Supported 00:08:34.498 00:08:34.498 Firmware Slot Information 00:08:34.498 ========================= 00:08:34.498 Active slot: 1 00:08:34.498 Slot 1 Firmware Revision: 1.0 00:08:34.498 00:08:34.498 00:08:34.498 Commands Supported and Effects 00:08:34.498 ============================== 00:08:34.498 Admin Commands 00:08:34.498 -------------- 00:08:34.498 Delete I/O Submission Queue (00h): Supported 00:08:34.498 Create I/O Submission Queue (01h): Supported 00:08:34.498 Get Log Page (02h): Supported 00:08:34.498 Delete I/O Completion Queue (04h): Supported 00:08:34.498 Create I/O Completion Queue (05h): Supported 00:08:34.498 Identify (06h): Supported 00:08:34.498 Abort (08h): Supported 00:08:34.498 Set Features (09h): Supported 00:08:34.498 Get Features (0Ah): Supported 00:08:34.498 Asynchronous Event Request (0Ch): Supported 00:08:34.498 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:34.498 Directive Send (19h): Supported 00:08:34.498 Directive Receive (1Ah): Supported 00:08:34.498 Virtualization Management (1Ch): Supported 00:08:34.498 Doorbell Buffer Config (7Ch): Supported 00:08:34.498 Format NVM (80h): Supported LBA-Change 00:08:34.498 I/O Commands 00:08:34.498 ------------ 00:08:34.498 Flush (00h): Supported LBA-Change 00:08:34.498 Write (01h): Supported LBA-Change 00:08:34.498 Read (02h): Supported 00:08:34.498 Compare (05h): Supported 00:08:34.498 Write Zeroes (08h): Supported LBA-Change 00:08:34.498 Dataset Management (09h): Supported LBA-Change 00:08:34.498 Unknown (0Ch): Supported 00:08:34.498 Unknown (12h): Supported 00:08:34.498 Copy (19h): Supported LBA-Change 00:08:34.498 Unknown (1Dh): Supported LBA-Change 00:08:34.498 00:08:34.498 Error Log 00:08:34.498 ========= 00:08:34.498 00:08:34.498 Arbitration 00:08:34.498 =========== 00:08:34.498 Arbitration Burst: no limit 00:08:34.498 00:08:34.498 Power Management 00:08:34.498 ================ 00:08:34.498 Number of Power States: 1 00:08:34.498 Current Power State: Power State #0 00:08:34.498 Power State #0: 00:08:34.498 Max Power: 25.00 W 00:08:34.498 Non-Operational State: Operational 00:08:34.498 Entry Latency: 16 microseconds 00:08:34.498 Exit Latency: 4 microseconds 00:08:34.498 Relative Read Throughput: 0 00:08:34.498 Relative 
Read Latency: 0 00:08:34.498 Relative Write Throughput: 0 00:08:34.498 Relative Write Latency: 0 00:08:34.498 Idle Power: Not Reported 00:08:34.498 Active Power: Not Reported 00:08:34.498 Non-Operational Permissive Mode: Not Supported 00:08:34.498 00:08:34.498 Health Information 00:08:34.498 ================== 00:08:34.498 Critical Warnings: 00:08:34.498 Available Spare Space: OK 00:08:34.498 Temperature: OK 00:08:34.498 Device Reliability: OK 00:08:34.498 Read Only: No 00:08:34.498 Volatile Memory Backup: OK 00:08:34.498 Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.498 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:34.498 Available Spare: 0% 00:08:34.498 Available Spare Threshold: 0% 00:08:34.498 Life Percentage Used: 0% 00:08:34.498 Data Units Read: 2049 00:08:34.498 Data Units Written: 1837 00:08:34.498 Host Read Commands: 96779 00:08:34.498 Host Write Commands: 95048 00:08:34.498 Controller Busy Time: 0 minutes 00:08:34.498 Power Cycles: 0 00:08:34.498 Power On Hours: 0 hours 00:08:34.498 Unsafe Shutdowns: 0 00:08:34.498 Unrecoverable Media Errors: 0 00:08:34.498 Lifetime Error Log Entries: 0 00:08:34.498 Warning Temperature Time: 0 minutes 00:08:34.498 Critical Temperature Time: 0 minutes 00:08:34.498 00:08:34.498 Number of Queues 00:08:34.498 ================ 00:08:34.498 Number of I/O Submission Queues: 64 00:08:34.498 Number of I/O Completion Queues: 64 00:08:34.498 00:08:34.498 ZNS Specific Controller Data 00:08:34.498 ============================ 00:08:34.498 Zone Append Size Limit: 0 00:08:34.498 00:08:34.498 00:08:34.498 Active Namespaces 00:08:34.498 ================= 00:08:34.498 Namespace ID:1 00:08:34.498 Error Recovery Timeout: Unlimited 00:08:34.498 Command Set Identifier: NVM (00h) 00:08:34.498 Deallocate: Supported 00:08:34.498 Deallocated/Unwritten Error: Supported 00:08:34.498 Deallocated Read Value: All 0x00 00:08:34.498 Deallocate in Write Zeroes: Not Supported 00:08:34.498 Deallocated Guard Field: 0xFFFF 00:08:34.498 Flush: Supported 00:08:34.498 Reservation: Not Supported 00:08:34.498 Namespace Sharing Capabilities: Private 00:08:34.498 Size (in LBAs): 1048576 (4GiB) 00:08:34.498 Capacity (in LBAs): 1048576 (4GiB) 00:08:34.498 Utilization (in LBAs): 1048576 (4GiB) 00:08:34.498 Thin Provisioning: Not Supported 00:08:34.498 Per-NS Atomic Units: No 00:08:34.498 Maximum Single Source Range Length: 128 00:08:34.498 Maximum Copy Length: 128 00:08:34.498 Maximum Source Range Count: 128 00:08:34.498 NGUID/EUI64 Never Reused: No 00:08:34.498 Namespace Write Protected: No 00:08:34.498 Number of LBA Formats: 8 00:08:34.498 Current LBA Format: LBA Format #04 00:08:34.498 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.498 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.498 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.498 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.498 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.498 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.498 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.498 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.498 00:08:34.498 NVM Specific Namespace Data 00:08:34.498 =========================== 00:08:34.498 Logical Block Storage Tag Mask: 0 00:08:34.498 Protection Information Capabilities: 00:08:34.498 16b Guard Protection Information Storage Tag Support: No 00:08:34.498 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.498 Storage Tag Check Read Support: No 00:08:34.498 Extended 
LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Namespace ID:2 00:08:34.498 Error Recovery Timeout: Unlimited 00:08:34.498 Command Set Identifier: NVM (00h) 00:08:34.498 Deallocate: Supported 00:08:34.498 Deallocated/Unwritten Error: Supported 00:08:34.498 Deallocated Read Value: All 0x00 00:08:34.498 Deallocate in Write Zeroes: Not Supported 00:08:34.498 Deallocated Guard Field: 0xFFFF 00:08:34.498 Flush: Supported 00:08:34.498 Reservation: Not Supported 00:08:34.498 Namespace Sharing Capabilities: Private 00:08:34.498 Size (in LBAs): 1048576 (4GiB) 00:08:34.498 Capacity (in LBAs): 1048576 (4GiB) 00:08:34.498 Utilization (in LBAs): 1048576 (4GiB) 00:08:34.498 Thin Provisioning: Not Supported 00:08:34.498 Per-NS Atomic Units: No 00:08:34.498 Maximum Single Source Range Length: 128 00:08:34.498 Maximum Copy Length: 128 00:08:34.498 Maximum Source Range Count: 128 00:08:34.498 NGUID/EUI64 Never Reused: No 00:08:34.498 Namespace Write Protected: No 00:08:34.498 Number of LBA Formats: 8 00:08:34.498 Current LBA Format: LBA Format #04 00:08:34.498 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.498 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.498 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.498 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.498 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.498 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.498 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.498 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.498 00:08:34.498 NVM Specific Namespace Data 00:08:34.498 =========================== 00:08:34.498 Logical Block Storage Tag Mask: 0 00:08:34.498 Protection Information Capabilities: 00:08:34.498 16b Guard Protection Information Storage Tag Support: No 00:08:34.498 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.498 Storage Tag Check Read Support: No 00:08:34.498 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.498 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 
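The Size, Capacity, and Utilization figures above are counts of LBAs at the namespace's current LBA format, so the byte size follows directly from the format table: namespaces 1 and 2 on this controller sit on LBA Format #04 (4096-byte data, no metadata), and 1048576 LBAs at 4096 bytes each is exactly the 4GiB shown. A one-line sanity check:

  # 1048576 LBAs x 4096-byte data size (LBA Format #04 above)
  echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB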
00:08:34.499 Namespace ID:3 00:08:34.499 Error Recovery Timeout: Unlimited 00:08:34.499 Command Set Identifier: NVM (00h) 00:08:34.499 Deallocate: Supported 00:08:34.499 Deallocated/Unwritten Error: Supported 00:08:34.499 Deallocated Read Value: All 0x00 00:08:34.499 Deallocate in Write Zeroes: Not Supported 00:08:34.499 Deallocated Guard Field: 0xFFFF 00:08:34.499 Flush: Supported 00:08:34.499 Reservation: Not Supported 00:08:34.499 Namespace Sharing Capabilities: Private 00:08:34.499 Size (in LBAs): 1048576 (4GiB) 00:08:34.499 Capacity (in LBAs): 1048576 (4GiB) 00:08:34.499 Utilization (in LBAs): 1048576 (4GiB) 00:08:34.499 Thin Provisioning: Not Supported 00:08:34.499 Per-NS Atomic Units: No 00:08:34.499 Maximum Single Source Range Length: 128 00:08:34.499 Maximum Copy Length: 128 00:08:34.499 Maximum Source Range Count: 128 00:08:34.499 NGUID/EUI64 Never Reused: No 00:08:34.499 Namespace Write Protected: No 00:08:34.499 Number of LBA Formats: 8 00:08:34.499 Current LBA Format: LBA Format #04 00:08:34.499 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.499 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.499 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.499 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.499 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.499 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.499 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.499 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.499 00:08:34.499 NVM Specific Namespace Data 00:08:34.499 =========================== 00:08:34.499 Logical Block Storage Tag Mask: 0 00:08:34.499 Protection Information Capabilities: 00:08:34.499 16b Guard Protection Information Storage Tag Support: No 00:08:34.499 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.499 Storage Tag Check Read Support: No 00:08:34.499 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.499 21:50:19 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:34.499 21:50:19 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:34.814 ===================================================== 00:08:34.814 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:34.814 ===================================================== 00:08:34.814 Controller Capabilities/Features 00:08:34.814 ================================ 00:08:34.814 Vendor ID: 1b36 00:08:34.814 Subsystem Vendor ID: 1af4 00:08:34.814 Serial Number: 12343 00:08:34.814 Model Number: QEMU NVMe Ctrl 00:08:34.814 Firmware Version: 8.0.0 00:08:34.814 Recommended Arb Burst: 6 00:08:34.814 IEEE OUI Identifier: 00 54 52 00:08:34.814 
Multi-path I/O 00:08:34.814 May have multiple subsystem ports: No 00:08:34.814 May have multiple controllers: Yes 00:08:34.814 Associated with SR-IOV VF: No 00:08:34.814 Max Data Transfer Size: 524288 00:08:34.814 Max Number of Namespaces: 256 00:08:34.814 Max Number of I/O Queues: 64 00:08:34.814 NVMe Specification Version (VS): 1.4 00:08:34.814 NVMe Specification Version (Identify): 1.4 00:08:34.814 Maximum Queue Entries: 2048 00:08:34.814 Contiguous Queues Required: Yes 00:08:34.814 Arbitration Mechanisms Supported 00:08:34.814 Weighted Round Robin: Not Supported 00:08:34.814 Vendor Specific: Not Supported 00:08:34.814 Reset Timeout: 7500 ms 00:08:34.814 Doorbell Stride: 4 bytes 00:08:34.814 NVM Subsystem Reset: Not Supported 00:08:34.814 Command Sets Supported 00:08:34.814 NVM Command Set: Supported 00:08:34.814 Boot Partition: Not Supported 00:08:34.814 Memory Page Size Minimum: 4096 bytes 00:08:34.814 Memory Page Size Maximum: 65536 bytes 00:08:34.814 Persistent Memory Region: Not Supported 00:08:34.814 Optional Asynchronous Events Supported 00:08:34.814 Namespace Attribute Notices: Supported 00:08:34.814 Firmware Activation Notices: Not Supported 00:08:34.814 ANA Change Notices: Not Supported 00:08:34.814 PLE Aggregate Log Change Notices: Not Supported 00:08:34.814 LBA Status Info Alert Notices: Not Supported 00:08:34.814 EGE Aggregate Log Change Notices: Not Supported 00:08:34.814 Normal NVM Subsystem Shutdown event: Not Supported 00:08:34.814 Zone Descriptor Change Notices: Not Supported 00:08:34.814 Discovery Log Change Notices: Not Supported 00:08:34.814 Controller Attributes 00:08:34.814 128-bit Host Identifier: Not Supported 00:08:34.814 Non-Operational Permissive Mode: Not Supported 00:08:34.814 NVM Sets: Not Supported 00:08:34.814 Read Recovery Levels: Not Supported 00:08:34.814 Endurance Groups: Supported 00:08:34.814 Predictable Latency Mode: Not Supported 00:08:34.814 Traffic Based Keep Alive: Not Supported 00:08:34.814 Namespace Granularity: Not Supported 00:08:34.814 SQ Associations: Not Supported 00:08:34.814 UUID List: Not Supported 00:08:34.814 Multi-Domain Subsystem: Not Supported 00:08:34.814 Fixed Capacity Management: Not Supported 00:08:34.814 Variable Capacity Management: Not Supported 00:08:34.815 Delete Endurance Group: Not Supported 00:08:34.815 Delete NVM Set: Not Supported 00:08:34.815 Extended LBA Formats Supported: Supported 00:08:34.815 Flexible Data Placement Supported: Supported 00:08:34.815 00:08:34.815 Controller Memory Buffer Support 00:08:34.815 ================================ 00:08:34.815 Supported: No 00:08:34.815 00:08:34.815 Persistent Memory Region Support 00:08:34.815 ================================ 00:08:34.815 Supported: No 00:08:34.815 00:08:34.815 Admin Command Set Attributes 00:08:34.815 ============================ 00:08:34.815 Security Send/Receive: Not Supported 00:08:34.815 Format NVM: Supported 00:08:34.815 Firmware Activate/Download: Not Supported 00:08:34.815 Namespace Management: Supported 00:08:34.815 Device Self-Test: Not Supported 00:08:34.815 Directives: Supported 00:08:34.815 NVMe-MI: Not Supported 00:08:34.815 Virtualization Management: Not Supported 00:08:34.815 Doorbell Buffer Config: Supported 00:08:34.815 Get LBA Status Capability: Not Supported 00:08:34.815 Command & Feature Lockdown Capability: Not Supported 00:08:34.815 Abort Command Limit: 4 00:08:34.815 Async Event Request Limit: 4 00:08:34.815 Number of Firmware Slots: N/A 00:08:34.815 Firmware Slot 1 Read-Only: N/A 00:08:34.815 Firmware Activation Without
Reset: N/A 00:08:34.815 Multiple Update Detection Support: N/A 00:08:34.815 Firmware Update Granularity: No Information Provided 00:08:34.815 Per-Namespace SMART Log: Yes 00:08:34.815 Asymmetric Namespace Access Log Page: Not Supported 00:08:34.815 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:34.815 Command Effects Log Page: Supported 00:08:34.815 Get Log Page Extended Data: Supported 00:08:34.815 Telemetry Log Pages: Not Supported 00:08:34.815 Persistent Event Log Pages: Not Supported 00:08:34.815 Supported Log Pages Log Page: May Support 00:08:34.815 Commands Supported & Effects Log Page: Not Supported 00:08:34.815 Feature Identifiers & Effects Log Page: May Support 00:08:34.815 NVMe-MI Commands & Effects Log Page: May Support 00:08:34.815 Data Area 4 for Telemetry Log: Not Supported 00:08:34.815 Error Log Page Entries Supported: 1 00:08:34.815 Keep Alive: Not Supported 00:08:34.815 00:08:34.815 NVM Command Set Attributes 00:08:34.815 ========================== 00:08:34.815 Submission Queue Entry Size 00:08:34.815 Max: 64 00:08:34.815 Min: 64 00:08:34.815 Completion Queue Entry Size 00:08:34.815 Max: 16 00:08:34.815 Min: 16 00:08:34.815 Number of Namespaces: 256 00:08:34.815 Compare Command: Supported 00:08:34.815 Write Uncorrectable Command: Not Supported 00:08:34.815 Dataset Management Command: Supported 00:08:34.815 Write Zeroes Command: Supported 00:08:34.815 Set Features Save Field: Supported 00:08:34.815 Reservations: Not Supported 00:08:34.815 Timestamp: Supported 00:08:34.815 Copy: Supported 00:08:34.815 Volatile Write Cache: Present 00:08:34.815 Atomic Write Unit (Normal): 1 00:08:34.815 Atomic Write Unit (PFail): 1 00:08:34.815 Atomic Compare & Write Unit: 1 00:08:34.815 Fused Compare & Write: Not Supported 00:08:34.815 Scatter-Gather List 00:08:34.815 SGL Command Set: Supported 00:08:34.815 SGL Keyed: Not Supported 00:08:34.815 SGL Bit Bucket Descriptor: Not Supported 00:08:34.815 SGL Metadata Pointer: Not Supported 00:08:34.815 Oversized SGL: Not Supported 00:08:34.815 SGL Metadata Address: Not Supported 00:08:34.815 SGL Offset: Not Supported 00:08:34.815 Transport SGL Data Block: Not Supported 00:08:34.815 Replay Protected Memory Block: Not Supported 00:08:34.815 00:08:34.815 Firmware Slot Information 00:08:34.815 ========================= 00:08:34.815 Active slot: 1 00:08:34.815 Slot 1 Firmware Revision: 1.0 00:08:34.815 00:08:34.815 00:08:34.815 Commands Supported and Effects 00:08:34.815 ============================== 00:08:34.815 Admin Commands 00:08:34.815 -------------- 00:08:34.815 Delete I/O Submission Queue (00h): Supported 00:08:34.815 Create I/O Submission Queue (01h): Supported 00:08:34.815 Get Log Page (02h): Supported 00:08:34.815 Delete I/O Completion Queue (04h): Supported 00:08:34.815 Create I/O Completion Queue (05h): Supported 00:08:34.815 Identify (06h): Supported 00:08:34.815 Abort (08h): Supported 00:08:34.815 Set Features (09h): Supported 00:08:34.815 Get Features (0Ah): Supported 00:08:34.815 Asynchronous Event Request (0Ch): Supported 00:08:34.815 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:34.815 Directive Send (19h): Supported 00:08:34.815 Directive Receive (1Ah): Supported 00:08:34.815 Virtualization Management (1Ch): Supported 00:08:34.815 Doorbell Buffer Config (7Ch): Supported 00:08:34.815 Format NVM (80h): Supported LBA-Change 00:08:34.815 I/O Commands 00:08:34.815 ------------ 00:08:34.815 Flush (00h): Supported LBA-Change 00:08:34.815 Write (01h): Supported LBA-Change 00:08:34.815 Read (02h): Supported
00:08:34.815 Compare (05h): Supported 00:08:34.815 Write Zeroes (08h): Supported LBA-Change 00:08:34.815 Dataset Management (09h): Supported LBA-Change 00:08:34.815 Unknown (0Ch): Supported 00:08:34.815 Unknown (12h): Supported 00:08:34.815 Copy (19h): Supported LBA-Change 00:08:34.815 Unknown (1Dh): Supported LBA-Change 00:08:34.815 00:08:34.815 Error Log 00:08:34.815 ========= 00:08:34.815 00:08:34.815 Arbitration 00:08:34.815 =========== 00:08:34.815 Arbitration Burst: no limit 00:08:34.815 00:08:34.815 Power Management 00:08:34.815 ================ 00:08:34.815 Number of Power States: 1 00:08:34.815 Current Power State: Power State #0 00:08:34.815 Power State #0: 00:08:34.815 Max Power: 25.00 W 00:08:34.815 Non-Operational State: Operational 00:08:34.815 Entry Latency: 16 microseconds 00:08:34.815 Exit Latency: 4 microseconds 00:08:34.815 Relative Read Throughput: 0 00:08:34.815 Relative Read Latency: 0 00:08:34.815 Relative Write Throughput: 0 00:08:34.815 Relative Write Latency: 0 00:08:34.815 Idle Power: Not Reported 00:08:34.815 Active Power: Not Reported 00:08:34.815 Non-Operational Permissive Mode: Not Supported 00:08:34.815 00:08:34.815 Health Information 00:08:34.815 ================== 00:08:34.815 Critical Warnings: 00:08:34.815 Available Spare Space: OK 00:08:34.815 Temperature: OK 00:08:34.815 Device Reliability: OK 00:08:34.815 Read Only: No 00:08:34.815 Volatile Memory Backup: OK 00:08:34.815 Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.815 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:34.815 Available Spare: 0% 00:08:34.815 Available Spare Threshold: 0% 00:08:34.815 Life Percentage Used: 0% 00:08:34.815 Data Units Read: 761 00:08:34.815 Data Units Written: 690 00:08:34.815 Host Read Commands: 32909 00:08:34.815 Host Write Commands: 32332 00:08:34.815 Controller Busy Time: 0 minutes 00:08:34.815 Power Cycles: 0 00:08:34.815 Power On Hours: 0 hours 00:08:34.815 Unsafe Shutdowns: 0 00:08:34.815 Unrecoverable Media Errors: 0 00:08:34.815 Lifetime Error Log Entries: 0 00:08:34.815 Warning Temperature Time: 0 minutes 00:08:34.815 Critical Temperature Time: 0 minutes 00:08:34.815 00:08:34.815 Number of Queues 00:08:34.815 ================ 00:08:34.815 Number of I/O Submission Queues: 64 00:08:34.815 Number of I/O Completion Queues: 64 00:08:34.815 00:08:34.815 ZNS Specific Controller Data 00:08:34.815 ============================ 00:08:34.815 Zone Append Size Limit: 0 00:08:34.815 00:08:34.815 00:08:34.815 Active Namespaces 00:08:34.815 ================= 00:08:34.815 Namespace ID:1 00:08:34.815 Error Recovery Timeout: Unlimited 00:08:34.815 Command Set Identifier: NVM (00h) 00:08:34.815 Deallocate: Supported 00:08:34.815 Deallocated/Unwritten Error: Supported 00:08:34.815 Deallocated Read Value: All 0x00 00:08:34.815 Deallocate in Write Zeroes: Not Supported 00:08:34.815 Deallocated Guard Field: 0xFFFF 00:08:34.815 Flush: Supported 00:08:34.815 Reservation: Not Supported 00:08:34.815 Namespace Sharing Capabilities: Multiple Controllers 00:08:34.815 Size (in LBAs): 262144 (1GiB) 00:08:34.815 Capacity (in LBAs): 262144 (1GiB) 00:08:34.815 Utilization (in LBAs): 262144 (1GiB) 00:08:34.815 Thin Provisioning: Not Supported 00:08:34.815 Per-NS Atomic Units: No 00:08:34.815 Maximum Single Source Range Length: 128 00:08:34.815 Maximum Copy Length: 128 00:08:34.815 Maximum Source Range Count: 128 00:08:34.815 NGUID/EUI64 Never Reused: No 00:08:34.815 Namespace Write Protected: No 00:08:34.815 Endurance group ID: 1 00:08:34.815 Number of LBA Formats: 8 00:08:34.815 Current 
LBA Format: LBA Format #04 00:08:34.815 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:34.815 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:34.815 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:34.815 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:34.815 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:34.816 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:34.816 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:34.816 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:34.816 00:08:34.816 Get Feature FDP: 00:08:34.816 ================ 00:08:34.816 Enabled: Yes 00:08:34.816 FDP configuration index: 0 00:08:34.816 00:08:34.816 FDP configurations log page 00:08:34.816 =========================== 00:08:34.816 Number of FDP configurations: 1 00:08:34.816 Version: 0 00:08:34.816 Size: 112 00:08:34.816 FDP Configuration Descriptor: 0 00:08:34.816 Descriptor Size: 96 00:08:34.816 Reclaim Group Identifier format: 2 00:08:34.816 FDP Volatile Write Cache: Not Present 00:08:34.816 FDP Configuration: Valid 00:08:34.816 Vendor Specific Size: 0 00:08:34.816 Number of Reclaim Groups: 2 00:08:34.816 Number of Reclaim Unit Handles: 8 00:08:34.816 Max Placement Identifiers: 128 00:08:34.816 Number of Namespaces Supported: 256 00:08:34.816 Reclaim Unit Nominal Size: 6000000 bytes 00:08:34.816 Estimated Reclaim Unit Time Limit: Not Reported 00:08:34.816 RUH Desc #000: RUH Type: Initially Isolated 00:08:34.816 RUH Desc #001: RUH Type: Initially Isolated 00:08:34.816 RUH Desc #002: RUH Type: Initially Isolated 00:08:34.816 RUH Desc #003: RUH Type: Initially Isolated 00:08:34.816 RUH Desc #004: RUH Type: Initially Isolated 00:08:34.816 RUH Desc #005: RUH Type: Initially Isolated 00:08:34.816 RUH Desc #006: RUH Type: Initially Isolated 00:08:34.816 RUH Desc #007: RUH Type: Initially Isolated 00:08:34.816 00:08:34.816 FDP reclaim unit handle usage log page 00:08:34.816 ====================================== 00:08:34.816 Number of Reclaim Unit Handles: 8 00:08:34.816 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:34.816 RUH Usage Desc #001: RUH Attributes: Unused 00:08:34.816 RUH Usage Desc #002: RUH Attributes: Unused 00:08:34.816 RUH Usage Desc #003: RUH Attributes: Unused 00:08:34.816 RUH Usage Desc #004: RUH Attributes: Unused 00:08:34.816 RUH Usage Desc #005: RUH Attributes: Unused 00:08:34.816 RUH Usage Desc #006: RUH Attributes: Unused 00:08:34.816 RUH Usage Desc #007: RUH Attributes: Unused 00:08:34.816 00:08:34.816 FDP statistics log page 00:08:34.816 ======================= 00:08:34.816 Host bytes with metadata written: 426770432 00:08:34.816 Media bytes with metadata written: 426856448 00:08:34.816 Media bytes erased: 0 00:08:34.816 00:08:34.816 FDP events log page 00:08:34.816 =================== 00:08:34.816 Number of FDP events: 0 00:08:34.816 00:08:34.816 NVM Specific Namespace Data 00:08:34.816 =========================== 00:08:34.816 Logical Block Storage Tag Mask: 0 00:08:34.816 Protection Information Capabilities: 00:08:34.816 16b Guard Protection Information Storage Tag Support: No 00:08:34.816 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:34.816 Storage Tag Check Read Support: No 00:08:34.816 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.816 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.816 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:08:34.816 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.816 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.816 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.816 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.816 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:34.816 00:08:34.816 real 0m1.218s 00:08:34.816 user 0m0.402s 00:08:34.816 sys 0m0.589s 00:08:34.816 21:50:19 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.816 ************************************ 00:08:34.816 END TEST nvme_identify 00:08:34.816 ************************************ 00:08:34.816 21:50:19 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:34.816 21:50:19 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:34.816 21:50:19 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.816 21:50:19 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.816 21:50:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.816 ************************************ 00:08:34.816 START TEST nvme_perf 00:08:34.816 ************************************ 00:08:34.816 21:50:19 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:34.816 21:50:19 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:36.204 Initializing NVMe Controllers 00:08:36.204 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:36.204 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:36.204 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:36.204 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:36.204 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:36.204 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:36.204 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:36.204 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:36.204 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:36.204 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:36.204 Initialization complete. Launching workers. 
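The spdk_nvme_perf invocation above fixes the workload shape: -q 128 sets the queue depth, -w read issues a pure read workload, -o 12288 uses 12 KiB I/Os, and -t 1 runs for one second; the doubled -L is what requests the detailed per-bucket latency histograms that follow (-i 0 and -N are carried over from the test script unchanged). A sketch of a queue-depth sweep with the same binary, if one wanted to see how the average latencies reported below shift with load:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  # Same read workload as the test above, swept across queue depths.
  for qd in 1 8 32 128; do
    sudo "$PERF" -q "$qd" -w read -o 12288 -t 1 -LL -i 0
  done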
00:08:36.204 ======================================================== 00:08:36.204 Latency(us) 00:08:36.204 Device Information : IOPS MiB/s Average min max 00:08:36.204 PCIE (0000:00:11.0) NSID 1 from core 0: 6660.86 78.06 19252.88 13766.74 37173.42 00:08:36.204 PCIE (0000:00:13.0) NSID 1 from core 0: 6660.86 78.06 19246.10 13073.40 36735.17 00:08:36.204 PCIE (0000:00:10.0) NSID 1 from core 0: 6660.86 78.06 19228.53 12281.90 37556.11 00:08:36.204 PCIE (0000:00:12.0) NSID 1 from core 0: 6660.86 78.06 19212.96 11244.85 37414.29 00:08:36.204 PCIE (0000:00:12.0) NSID 2 from core 0: 6660.86 78.06 19194.38 8641.18 37982.33 00:08:36.204 PCIE (0000:00:12.0) NSID 3 from core 0: 6724.30 78.80 18993.47 7718.93 28449.11 00:08:36.204 ======================================================== 00:08:36.204 Total : 40028.63 469.09 19187.75 7718.93 37982.33 00:08:36.204 00:08:36.204 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:36.204 ================================================================================= 00:08:36.204 1.00000% : 14518.745us 00:08:36.204 10.00000% : 16232.763us 00:08:36.204 25.00000% : 17543.483us 00:08:36.204 50.00000% : 19156.677us 00:08:36.204 75.00000% : 20669.046us 00:08:36.204 90.00000% : 21878.942us 00:08:36.204 95.00000% : 22988.012us 00:08:36.204 98.00000% : 24802.855us 00:08:36.204 99.00000% : 28029.243us 00:08:36.204 99.50000% : 36700.160us 00:08:36.204 99.90000% : 37103.458us 00:08:36.204 99.99000% : 37305.108us 00:08:36.204 99.99900% : 37305.108us 00:08:36.204 99.99990% : 37305.108us 00:08:36.204 99.99999% : 37305.108us 00:08:36.204 00:08:36.204 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:36.204 ================================================================================= 00:08:36.204 1.00000% : 14216.271us 00:08:36.204 10.00000% : 16131.938us 00:08:36.204 25.00000% : 17644.308us 00:08:36.204 50.00000% : 19156.677us 00:08:36.204 75.00000% : 20669.046us 00:08:36.204 90.00000% : 22080.591us 00:08:36.204 95.00000% : 23290.486us 00:08:36.204 98.00000% : 25105.329us 00:08:36.204 99.00000% : 28029.243us 00:08:36.204 99.50000% : 36095.212us 00:08:36.204 99.90000% : 36700.160us 00:08:36.204 99.99000% : 36901.809us 00:08:36.204 99.99900% : 36901.809us 00:08:36.204 99.99990% : 36901.809us 00:08:36.204 99.99999% : 36901.809us 00:08:36.204 00:08:36.204 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:36.204 ================================================================================= 00:08:36.204 1.00000% : 13812.972us 00:08:36.204 10.00000% : 16131.938us 00:08:36.204 25.00000% : 17543.483us 00:08:36.204 50.00000% : 19257.502us 00:08:36.204 75.00000% : 20669.046us 00:08:36.204 90.00000% : 22080.591us 00:08:36.204 95.00000% : 23088.837us 00:08:36.204 98.00000% : 24903.680us 00:08:36.204 99.00000% : 27625.945us 00:08:36.204 99.50000% : 36498.511us 00:08:36.204 99.90000% : 37305.108us 00:08:36.204 99.99000% : 37708.406us 00:08:36.204 99.99900% : 37708.406us 00:08:36.204 99.99990% : 37708.406us 00:08:36.204 99.99999% : 37708.406us 00:08:36.204 00:08:36.204 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:36.204 ================================================================================= 00:08:36.204 1.00000% : 13208.025us 00:08:36.204 10.00000% : 16333.588us 00:08:36.204 25.00000% : 17543.483us 00:08:36.204 50.00000% : 19257.502us 00:08:36.204 75.00000% : 20568.222us 00:08:36.204 90.00000% : 21979.766us 00:08:36.204 95.00000% : 23088.837us 00:08:36.204 98.00000% : 24601.206us 
00:08:36.204 99.00000% : 27222.646us 00:08:36.205 99.50000% : 36498.511us 00:08:36.205 99.90000% : 37305.108us 00:08:36.205 99.99000% : 37506.757us 00:08:36.205 99.99900% : 37506.757us 00:08:36.205 99.99990% : 37506.757us 00:08:36.205 99.99999% : 37506.757us 00:08:36.205 00:08:36.205 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:36.205 ================================================================================= 00:08:36.205 1.00000% : 13913.797us 00:08:36.205 10.00000% : 16232.763us 00:08:36.205 25.00000% : 17442.658us 00:08:36.205 50.00000% : 19257.502us 00:08:36.205 75.00000% : 20568.222us 00:08:36.205 90.00000% : 22080.591us 00:08:36.205 95.00000% : 23088.837us 00:08:36.205 98.00000% : 24802.855us 00:08:36.205 99.00000% : 28230.892us 00:08:36.205 99.50000% : 37103.458us 00:08:36.205 99.90000% : 37910.055us 00:08:36.205 99.99000% : 38111.705us 00:08:36.205 99.99900% : 38111.705us 00:08:36.205 99.99990% : 38111.705us 00:08:36.205 99.99999% : 38111.705us 00:08:36.205 00:08:36.205 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:36.205 ================================================================================= 00:08:36.205 1.00000% : 14115.446us 00:08:36.205 10.00000% : 16232.763us 00:08:36.205 25.00000% : 17442.658us 00:08:36.205 50.00000% : 19055.852us 00:08:36.205 75.00000% : 20568.222us 00:08:36.205 90.00000% : 21979.766us 00:08:36.205 95.00000% : 22887.188us 00:08:36.205 98.00000% : 23895.434us 00:08:36.205 99.00000% : 25105.329us 00:08:36.205 99.50000% : 27827.594us 00:08:36.205 99.90000% : 28432.542us 00:08:36.205 99.99000% : 28634.191us 00:08:36.205 99.99900% : 28634.191us 00:08:36.205 99.99990% : 28634.191us 00:08:36.205 99.99999% : 28634.191us 00:08:36.205 00:08:36.205 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:36.205 ============================================================================== 00:08:36.205 Range in us Cumulative IO count 00:08:36.205 13712.148 - 13812.972: 0.0446% ( 3) 00:08:36.205 13812.972 - 13913.797: 0.1190% ( 5) 00:08:36.205 13913.797 - 14014.622: 0.2083% ( 6) 00:08:36.205 14014.622 - 14115.446: 0.3274% ( 8) 00:08:36.205 14115.446 - 14216.271: 0.4762% ( 10) 00:08:36.205 14216.271 - 14317.095: 0.6399% ( 11) 00:08:36.205 14317.095 - 14417.920: 0.9226% ( 19) 00:08:36.205 14417.920 - 14518.745: 1.3244% ( 27) 00:08:36.205 14518.745 - 14619.569: 1.7411% ( 28) 00:08:36.205 14619.569 - 14720.394: 2.0982% ( 24) 00:08:36.205 14720.394 - 14821.218: 2.4851% ( 26) 00:08:36.205 14821.218 - 14922.043: 2.9464% ( 31) 00:08:36.205 14922.043 - 15022.868: 3.3631% ( 28) 00:08:36.205 15022.868 - 15123.692: 3.8095% ( 30) 00:08:36.205 15123.692 - 15224.517: 4.2411% ( 29) 00:08:36.205 15224.517 - 15325.342: 4.7173% ( 32) 00:08:36.205 15325.342 - 15426.166: 5.3869% ( 45) 00:08:36.205 15426.166 - 15526.991: 6.1458% ( 51) 00:08:36.205 15526.991 - 15627.815: 6.8601% ( 48) 00:08:36.205 15627.815 - 15728.640: 7.4554% ( 40) 00:08:36.205 15728.640 - 15829.465: 8.1250% ( 45) 00:08:36.205 15829.465 - 15930.289: 8.7649% ( 43) 00:08:36.205 15930.289 - 16031.114: 9.4048% ( 43) 00:08:36.205 16031.114 - 16131.938: 9.9702% ( 38) 00:08:36.205 16131.938 - 16232.763: 10.4762% ( 34) 00:08:36.205 16232.763 - 16333.588: 11.0565% ( 39) 00:08:36.205 16333.588 - 16434.412: 11.7411% ( 46) 00:08:36.205 16434.412 - 16535.237: 12.4554% ( 48) 00:08:36.205 16535.237 - 16636.062: 13.3929% ( 63) 00:08:36.205 16636.062 - 16736.886: 14.4048% ( 68) 00:08:36.205 16736.886 - 16837.711: 15.5060% ( 74) 00:08:36.205 16837.711 - 16938.535: 
16.7113% ( 81) 00:08:36.205 16938.535 - 17039.360: 18.0060% ( 87) 00:08:36.205 17039.360 - 17140.185: 19.3750% ( 92) 00:08:36.205 17140.185 - 17241.009: 20.7440% ( 92) 00:08:36.205 17241.009 - 17341.834: 22.2917% ( 104) 00:08:36.205 17341.834 - 17442.658: 23.7946% ( 101) 00:08:36.205 17442.658 - 17543.483: 25.3423% ( 104) 00:08:36.205 17543.483 - 17644.308: 26.9940% ( 111) 00:08:36.205 17644.308 - 17745.132: 28.7202% ( 116) 00:08:36.205 17745.132 - 17845.957: 30.5952% ( 126) 00:08:36.205 17845.957 - 17946.782: 32.3958% ( 121) 00:08:36.205 17946.782 - 18047.606: 34.1369% ( 117) 00:08:36.205 18047.606 - 18148.431: 35.9375% ( 121) 00:08:36.205 18148.431 - 18249.255: 37.5298% ( 107) 00:08:36.205 18249.255 - 18350.080: 38.9286% ( 94) 00:08:36.205 18350.080 - 18450.905: 40.3571% ( 96) 00:08:36.205 18450.905 - 18551.729: 41.6518% ( 87) 00:08:36.205 18551.729 - 18652.554: 43.1696% ( 102) 00:08:36.205 18652.554 - 18753.378: 44.5982% ( 96) 00:08:36.205 18753.378 - 18854.203: 46.1161% ( 102) 00:08:36.205 18854.203 - 18955.028: 47.6786% ( 105) 00:08:36.205 18955.028 - 19055.852: 49.1369% ( 98) 00:08:36.205 19055.852 - 19156.677: 50.6845% ( 104) 00:08:36.205 19156.677 - 19257.502: 52.2321% ( 104) 00:08:36.205 19257.502 - 19358.326: 53.8393% ( 108) 00:08:36.205 19358.326 - 19459.151: 55.3869% ( 104) 00:08:36.205 19459.151 - 19559.975: 56.9494% ( 105) 00:08:36.205 19559.975 - 19660.800: 58.5863% ( 110) 00:08:36.205 19660.800 - 19761.625: 60.2530% ( 112) 00:08:36.205 19761.625 - 19862.449: 61.8899% ( 110) 00:08:36.205 19862.449 - 19963.274: 63.7202% ( 123) 00:08:36.205 19963.274 - 20064.098: 65.3869% ( 112) 00:08:36.205 20064.098 - 20164.923: 67.0536% ( 112) 00:08:36.205 20164.923 - 20265.748: 68.7054% ( 111) 00:08:36.205 20265.748 - 20366.572: 70.2976% ( 107) 00:08:36.205 20366.572 - 20467.397: 71.8155% ( 102) 00:08:36.205 20467.397 - 20568.222: 73.4077% ( 107) 00:08:36.205 20568.222 - 20669.046: 75.0744% ( 112) 00:08:36.205 20669.046 - 20769.871: 76.7560% ( 113) 00:08:36.205 20769.871 - 20870.695: 78.3036% ( 104) 00:08:36.205 20870.695 - 20971.520: 79.6577% ( 91) 00:08:36.205 20971.520 - 21072.345: 81.0119% ( 91) 00:08:36.205 21072.345 - 21173.169: 82.1875% ( 79) 00:08:36.205 21173.169 - 21273.994: 83.3780% ( 80) 00:08:36.205 21273.994 - 21374.818: 84.4940% ( 75) 00:08:36.205 21374.818 - 21475.643: 85.5952% ( 74) 00:08:36.205 21475.643 - 21576.468: 86.7708% ( 79) 00:08:36.205 21576.468 - 21677.292: 87.9315% ( 78) 00:08:36.205 21677.292 - 21778.117: 88.9583% ( 69) 00:08:36.205 21778.117 - 21878.942: 90.0149% ( 71) 00:08:36.205 21878.942 - 21979.766: 90.7887% ( 52) 00:08:36.205 21979.766 - 22080.591: 91.4435% ( 44) 00:08:36.205 22080.591 - 22181.415: 91.9792% ( 36) 00:08:36.205 22181.415 - 22282.240: 92.4851% ( 34) 00:08:36.205 22282.240 - 22383.065: 93.0060% ( 35) 00:08:36.205 22383.065 - 22483.889: 93.4524% ( 30) 00:08:36.205 22483.889 - 22584.714: 93.8393% ( 26) 00:08:36.205 22584.714 - 22685.538: 94.2411% ( 27) 00:08:36.205 22685.538 - 22786.363: 94.5387% ( 20) 00:08:36.205 22786.363 - 22887.188: 94.7619% ( 15) 00:08:36.205 22887.188 - 22988.012: 95.1042% ( 23) 00:08:36.205 22988.012 - 23088.837: 95.3869% ( 19) 00:08:36.205 23088.837 - 23189.662: 95.6548% ( 18) 00:08:36.205 23189.662 - 23290.486: 95.9821% ( 22) 00:08:36.205 23290.486 - 23391.311: 96.2649% ( 19) 00:08:36.205 23391.311 - 23492.135: 96.5030% ( 16) 00:08:36.205 23492.135 - 23592.960: 96.7113% ( 14) 00:08:36.205 23592.960 - 23693.785: 96.9494% ( 16) 00:08:36.205 23693.785 - 23794.609: 97.1131% ( 11) 00:08:36.205 23794.609 - 23895.434: 
97.2917% ( 12) 00:08:36.205 23895.434 - 23996.258: 97.4702% ( 12) 00:08:36.205 23996.258 - 24097.083: 97.6042% ( 9) 00:08:36.205 24097.083 - 24197.908: 97.6935% ( 6) 00:08:36.205 24197.908 - 24298.732: 97.7530% ( 4) 00:08:36.205 24298.732 - 24399.557: 97.8125% ( 4) 00:08:36.205 24399.557 - 24500.382: 97.8720% ( 4) 00:08:36.205 24500.382 - 24601.206: 97.9315% ( 4) 00:08:36.205 24601.206 - 24702.031: 97.9911% ( 4) 00:08:36.205 24702.031 - 24802.855: 98.0357% ( 3) 00:08:36.205 24802.855 - 24903.680: 98.0952% ( 4) 00:08:36.205 26617.698 - 26819.348: 98.1548% ( 4) 00:08:36.205 26819.348 - 27020.997: 98.3036% ( 10) 00:08:36.205 27020.997 - 27222.646: 98.4673% ( 11) 00:08:36.205 27222.646 - 27424.295: 98.6458% ( 12) 00:08:36.205 27424.295 - 27625.945: 98.8095% ( 11) 00:08:36.205 27625.945 - 27827.594: 98.9732% ( 11) 00:08:36.205 27827.594 - 28029.243: 99.0476% ( 5) 00:08:36.205 35893.563 - 36095.212: 99.1518% ( 7) 00:08:36.205 36095.212 - 36296.862: 99.3006% ( 10) 00:08:36.205 36296.862 - 36498.511: 99.4643% ( 11) 00:08:36.205 36498.511 - 36700.160: 99.6280% ( 11) 00:08:36.205 36700.160 - 36901.809: 99.7917% ( 11) 00:08:36.205 36901.809 - 37103.458: 99.9554% ( 11) 00:08:36.205 37103.458 - 37305.108: 100.0000% ( 3) 00:08:36.205 00:08:36.205 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:36.205 ============================================================================== 00:08:36.205 Range in us Cumulative IO count 00:08:36.205 13006.375 - 13107.200: 0.0298% ( 2) 00:08:36.205 13107.200 - 13208.025: 0.1042% ( 5) 00:08:36.205 13208.025 - 13308.849: 0.1786% ( 5) 00:08:36.205 13308.849 - 13409.674: 0.2530% ( 5) 00:08:36.205 13409.674 - 13510.498: 0.3274% ( 5) 00:08:36.205 13510.498 - 13611.323: 0.4167% ( 6) 00:08:36.205 13611.323 - 13712.148: 0.4911% ( 5) 00:08:36.205 13712.148 - 13812.972: 0.5506% ( 4) 00:08:36.205 13812.972 - 13913.797: 0.6399% ( 6) 00:08:36.205 13913.797 - 14014.622: 0.7738% ( 9) 00:08:36.205 14014.622 - 14115.446: 0.9673% ( 13) 00:08:36.205 14115.446 - 14216.271: 1.2054% ( 16) 00:08:36.205 14216.271 - 14317.095: 1.4881% ( 19) 00:08:36.205 14317.095 - 14417.920: 1.7113% ( 15) 00:08:36.205 14417.920 - 14518.745: 1.9494% ( 16) 00:08:36.205 14518.745 - 14619.569: 2.2173% ( 18) 00:08:36.205 14619.569 - 14720.394: 2.4107% ( 13) 00:08:36.205 14720.394 - 14821.218: 2.6339% ( 15) 00:08:36.205 14821.218 - 14922.043: 2.8571% ( 15) 00:08:36.205 14922.043 - 15022.868: 3.2440% ( 26) 00:08:36.205 15022.868 - 15123.692: 3.6161% ( 25) 00:08:36.206 15123.692 - 15224.517: 4.1964% ( 39) 00:08:36.206 15224.517 - 15325.342: 4.8958% ( 47) 00:08:36.206 15325.342 - 15426.166: 5.6845% ( 53) 00:08:36.206 15426.166 - 15526.991: 6.2946% ( 41) 00:08:36.206 15526.991 - 15627.815: 6.8750% ( 39) 00:08:36.206 15627.815 - 15728.640: 7.4554% ( 39) 00:08:36.206 15728.640 - 15829.465: 8.2440% ( 53) 00:08:36.206 15829.465 - 15930.289: 9.0625% ( 55) 00:08:36.206 15930.289 - 16031.114: 9.8363% ( 52) 00:08:36.206 16031.114 - 16131.938: 10.6548% ( 55) 00:08:36.206 16131.938 - 16232.763: 11.5774% ( 62) 00:08:36.206 16232.763 - 16333.588: 12.5149% ( 63) 00:08:36.206 16333.588 - 16434.412: 13.3929% ( 59) 00:08:36.206 16434.412 - 16535.237: 14.1667% ( 52) 00:08:36.206 16535.237 - 16636.062: 14.9405% ( 52) 00:08:36.206 16636.062 - 16736.886: 15.8929% ( 64) 00:08:36.206 16736.886 - 16837.711: 16.8452% ( 64) 00:08:36.206 16837.711 - 16938.535: 17.8571% ( 68) 00:08:36.206 16938.535 - 17039.360: 18.8839% ( 69) 00:08:36.206 17039.360 - 17140.185: 19.9256% ( 70) 00:08:36.206 17140.185 - 17241.009: 20.9524% ( 69) 
00:08:36.206 17241.009 - 17341.834: 22.1280% ( 79) 00:08:36.206 17341.834 - 17442.658: 23.1548% ( 69) 00:08:36.206 17442.658 - 17543.483: 24.4196% ( 85) 00:08:36.206 17543.483 - 17644.308: 25.7738% ( 91) 00:08:36.206 17644.308 - 17745.132: 27.4256% ( 111) 00:08:36.206 17745.132 - 17845.957: 28.9732% ( 104) 00:08:36.206 17845.957 - 17946.782: 30.5357% ( 105) 00:08:36.206 17946.782 - 18047.606: 32.2470% ( 115) 00:08:36.206 18047.606 - 18148.431: 33.8095% ( 105) 00:08:36.206 18148.431 - 18249.255: 35.5208% ( 115) 00:08:36.206 18249.255 - 18350.080: 37.2768% ( 118) 00:08:36.206 18350.080 - 18450.905: 39.0327% ( 118) 00:08:36.206 18450.905 - 18551.729: 40.9226% ( 127) 00:08:36.206 18551.729 - 18652.554: 42.9464% ( 136) 00:08:36.206 18652.554 - 18753.378: 44.8214% ( 126) 00:08:36.206 18753.378 - 18854.203: 46.7113% ( 127) 00:08:36.206 18854.203 - 18955.028: 48.4077% ( 114) 00:08:36.206 18955.028 - 19055.852: 49.8512% ( 97) 00:08:36.206 19055.852 - 19156.677: 51.3244% ( 99) 00:08:36.206 19156.677 - 19257.502: 52.8423% ( 102) 00:08:36.206 19257.502 - 19358.326: 54.6280% ( 120) 00:08:36.206 19358.326 - 19459.151: 56.2946% ( 112) 00:08:36.206 19459.151 - 19559.975: 57.9613% ( 112) 00:08:36.206 19559.975 - 19660.800: 59.4494% ( 100) 00:08:36.206 19660.800 - 19761.625: 61.1012% ( 111) 00:08:36.206 19761.625 - 19862.449: 62.8869% ( 120) 00:08:36.206 19862.449 - 19963.274: 64.9554% ( 139) 00:08:36.206 19963.274 - 20064.098: 66.7857% ( 123) 00:08:36.206 20064.098 - 20164.923: 68.5268% ( 117) 00:08:36.206 20164.923 - 20265.748: 70.3274% ( 121) 00:08:36.206 20265.748 - 20366.572: 71.9792% ( 111) 00:08:36.206 20366.572 - 20467.397: 73.3780% ( 94) 00:08:36.206 20467.397 - 20568.222: 74.6280% ( 84) 00:08:36.206 20568.222 - 20669.046: 75.9226% ( 87) 00:08:36.206 20669.046 - 20769.871: 77.2321% ( 88) 00:08:36.206 20769.871 - 20870.695: 78.5417% ( 88) 00:08:36.206 20870.695 - 20971.520: 79.7321% ( 80) 00:08:36.206 20971.520 - 21072.345: 81.0119% ( 86) 00:08:36.206 21072.345 - 21173.169: 82.0982% ( 73) 00:08:36.206 21173.169 - 21273.994: 83.0060% ( 61) 00:08:36.206 21273.994 - 21374.818: 83.9881% ( 66) 00:08:36.206 21374.818 - 21475.643: 84.9107% ( 62) 00:08:36.206 21475.643 - 21576.468: 85.9970% ( 73) 00:08:36.206 21576.468 - 21677.292: 87.0685% ( 72) 00:08:36.206 21677.292 - 21778.117: 88.1548% ( 73) 00:08:36.206 21778.117 - 21878.942: 89.1071% ( 64) 00:08:36.206 21878.942 - 21979.766: 89.9107% ( 54) 00:08:36.206 21979.766 - 22080.591: 90.4464% ( 36) 00:08:36.206 22080.591 - 22181.415: 90.9673% ( 35) 00:08:36.206 22181.415 - 22282.240: 91.5327% ( 38) 00:08:36.206 22282.240 - 22383.065: 92.0089% ( 32) 00:08:36.206 22383.065 - 22483.889: 92.4256% ( 28) 00:08:36.206 22483.889 - 22584.714: 92.8423% ( 28) 00:08:36.206 22584.714 - 22685.538: 93.1548% ( 21) 00:08:36.206 22685.538 - 22786.363: 93.4375% ( 19) 00:08:36.206 22786.363 - 22887.188: 93.8095% ( 25) 00:08:36.206 22887.188 - 22988.012: 94.1369% ( 22) 00:08:36.206 22988.012 - 23088.837: 94.4940% ( 24) 00:08:36.206 23088.837 - 23189.662: 94.8214% ( 22) 00:08:36.206 23189.662 - 23290.486: 95.1786% ( 24) 00:08:36.206 23290.486 - 23391.311: 95.5208% ( 23) 00:08:36.206 23391.311 - 23492.135: 95.8631% ( 23) 00:08:36.206 23492.135 - 23592.960: 96.2351% ( 25) 00:08:36.206 23592.960 - 23693.785: 96.5923% ( 24) 00:08:36.206 23693.785 - 23794.609: 96.8750% ( 19) 00:08:36.206 23794.609 - 23895.434: 97.0982% ( 15) 00:08:36.206 23895.434 - 23996.258: 97.3065% ( 14) 00:08:36.206 23996.258 - 24097.083: 97.4256% ( 8) 00:08:36.206 24097.083 - 24197.908: 97.5149% ( 6) 00:08:36.206 
24197.908 - 24298.732: 97.5744% ( 4) 00:08:36.206 24298.732 - 24399.557: 97.6339% ( 4) 00:08:36.206 24399.557 - 24500.382: 97.6935% ( 4) 00:08:36.206 24500.382 - 24601.206: 97.7381% ( 3) 00:08:36.206 24601.206 - 24702.031: 97.7827% ( 3) 00:08:36.206 24702.031 - 24802.855: 97.8423% ( 4) 00:08:36.206 24802.855 - 24903.680: 97.9018% ( 4) 00:08:36.206 24903.680 - 25004.505: 97.9613% ( 4) 00:08:36.206 25004.505 - 25105.329: 98.0208% ( 4) 00:08:36.206 25105.329 - 25206.154: 98.0506% ( 2) 00:08:36.206 25206.154 - 25306.978: 98.0952% ( 3) 00:08:36.206 26617.698 - 26819.348: 98.1696% ( 5) 00:08:36.206 26819.348 - 27020.997: 98.3185% ( 10) 00:08:36.206 27020.997 - 27222.646: 98.4673% ( 10) 00:08:36.206 27222.646 - 27424.295: 98.6310% ( 11) 00:08:36.206 27424.295 - 27625.945: 98.7946% ( 11) 00:08:36.206 27625.945 - 27827.594: 98.9583% ( 11) 00:08:36.206 27827.594 - 28029.243: 99.0476% ( 6) 00:08:36.206 35288.615 - 35490.265: 99.0625% ( 1) 00:08:36.206 35490.265 - 35691.914: 99.1964% ( 9) 00:08:36.206 35691.914 - 35893.563: 99.3452% ( 10) 00:08:36.206 35893.563 - 36095.212: 99.5089% ( 11) 00:08:36.206 36095.212 - 36296.862: 99.6726% ( 11) 00:08:36.206 36296.862 - 36498.511: 99.8065% ( 9) 00:08:36.206 36498.511 - 36700.160: 99.9702% ( 11) 00:08:36.206 36700.160 - 36901.809: 100.0000% ( 2) 00:08:36.206 00:08:36.206 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:36.206 ============================================================================== 00:08:36.206 Range in us Cumulative IO count 00:08:36.206 12250.191 - 12300.603: 0.0149% ( 1) 00:08:36.206 12300.603 - 12351.015: 0.0595% ( 3) 00:08:36.206 12351.015 - 12401.428: 0.0744% ( 1) 00:08:36.206 12401.428 - 12451.840: 0.1190% ( 3) 00:08:36.206 12451.840 - 12502.252: 0.1339% ( 1) 00:08:36.206 12502.252 - 12552.665: 0.1637% ( 2) 00:08:36.206 12552.665 - 12603.077: 0.1935% ( 2) 00:08:36.206 12603.077 - 12653.489: 0.2381% ( 3) 00:08:36.206 12703.902 - 12754.314: 0.2679% ( 2) 00:08:36.206 12754.314 - 12804.726: 0.3125% ( 3) 00:08:36.206 12804.726 - 12855.138: 0.3423% ( 2) 00:08:36.206 12855.138 - 12905.551: 0.3720% ( 2) 00:08:36.206 12905.551 - 13006.375: 0.4464% ( 5) 00:08:36.206 13006.375 - 13107.200: 0.4911% ( 3) 00:08:36.206 13107.200 - 13208.025: 0.5357% ( 3) 00:08:36.206 13208.025 - 13308.849: 0.5952% ( 4) 00:08:36.206 13308.849 - 13409.674: 0.6399% ( 3) 00:08:36.206 13409.674 - 13510.498: 0.7292% ( 6) 00:08:36.206 13510.498 - 13611.323: 0.7887% ( 4) 00:08:36.206 13611.323 - 13712.148: 0.9226% ( 9) 00:08:36.206 13712.148 - 13812.972: 1.0565% ( 9) 00:08:36.206 13812.972 - 13913.797: 1.1458% ( 6) 00:08:36.206 13913.797 - 14014.622: 1.2054% ( 4) 00:08:36.206 14014.622 - 14115.446: 1.3542% ( 10) 00:08:36.206 14115.446 - 14216.271: 1.4435% ( 6) 00:08:36.206 14216.271 - 14317.095: 1.6220% ( 12) 00:08:36.206 14317.095 - 14417.920: 1.7708% ( 10) 00:08:36.206 14417.920 - 14518.745: 1.9494% ( 12) 00:08:36.206 14518.745 - 14619.569: 2.0833% ( 9) 00:08:36.206 14619.569 - 14720.394: 2.2768% ( 13) 00:08:36.206 14720.394 - 14821.218: 2.5000% ( 15) 00:08:36.206 14821.218 - 14922.043: 2.7679% ( 18) 00:08:36.206 14922.043 - 15022.868: 2.9762% ( 14) 00:08:36.206 15022.868 - 15123.692: 3.3482% ( 25) 00:08:36.206 15123.692 - 15224.517: 3.7202% ( 25) 00:08:36.206 15224.517 - 15325.342: 4.0476% ( 22) 00:08:36.206 15325.342 - 15426.166: 4.7173% ( 45) 00:08:36.206 15426.166 - 15526.991: 5.2976% ( 39) 00:08:36.206 15526.991 - 15627.815: 5.9077% ( 41) 00:08:36.206 15627.815 - 15728.640: 6.7262% ( 55) 00:08:36.206 15728.640 - 15829.465: 7.4702% ( 50) 
00:08:36.206 15829.465 - 15930.289: 8.2738% ( 54) 00:08:36.206 15930.289 - 16031.114: 9.3006% ( 69) 00:08:36.206 16031.114 - 16131.938: 10.2381% ( 63) 00:08:36.206 16131.938 - 16232.763: 11.2054% ( 65) 00:08:36.206 16232.763 - 16333.588: 12.3065% ( 74) 00:08:36.206 16333.588 - 16434.412: 13.2143% ( 61) 00:08:36.206 16434.412 - 16535.237: 14.1220% ( 61) 00:08:36.206 16535.237 - 16636.062: 15.2083% ( 73) 00:08:36.206 16636.062 - 16736.886: 16.2946% ( 73) 00:08:36.206 16736.886 - 16837.711: 17.5595% ( 85) 00:08:36.206 16837.711 - 16938.535: 18.7202% ( 78) 00:08:36.206 16938.535 - 17039.360: 19.9256% ( 81) 00:08:36.206 17039.360 - 17140.185: 21.1905% ( 85) 00:08:36.206 17140.185 - 17241.009: 22.4405% ( 84) 00:08:36.206 17241.009 - 17341.834: 23.5119% ( 72) 00:08:36.206 17341.834 - 17442.658: 24.7619% ( 84) 00:08:36.206 17442.658 - 17543.483: 26.1012% ( 90) 00:08:36.206 17543.483 - 17644.308: 27.6786% ( 106) 00:08:36.206 17644.308 - 17745.132: 29.1220% ( 97) 00:08:36.206 17745.132 - 17845.957: 30.7292% ( 108) 00:08:36.206 17845.957 - 17946.782: 32.3363% ( 108) 00:08:36.206 17946.782 - 18047.606: 33.9137% ( 106) 00:08:36.206 18047.606 - 18148.431: 35.2827% ( 92) 00:08:36.206 18148.431 - 18249.255: 36.6964% ( 95) 00:08:36.206 18249.255 - 18350.080: 38.0952% ( 94) 00:08:36.206 18350.080 - 18450.905: 39.4494% ( 91) 00:08:36.206 18450.905 - 18551.729: 40.7887% ( 90) 00:08:36.206 18551.729 - 18652.554: 42.4256% ( 110) 00:08:36.206 18652.554 - 18753.378: 43.9137% ( 100) 00:08:36.207 18753.378 - 18854.203: 45.4018% ( 100) 00:08:36.207 18854.203 - 18955.028: 46.9792% ( 106) 00:08:36.207 18955.028 - 19055.852: 48.2738% ( 87) 00:08:36.207 19055.852 - 19156.677: 49.7321% ( 98) 00:08:36.207 19156.677 - 19257.502: 51.4732% ( 117) 00:08:36.207 19257.502 - 19358.326: 53.1994% ( 116) 00:08:36.207 19358.326 - 19459.151: 54.8810% ( 113) 00:08:36.207 19459.151 - 19559.975: 56.6369% ( 118) 00:08:36.207 19559.975 - 19660.800: 58.4970% ( 125) 00:08:36.207 19660.800 - 19761.625: 60.2976% ( 121) 00:08:36.207 19761.625 - 19862.449: 62.0833% ( 120) 00:08:36.207 19862.449 - 19963.274: 63.8988% ( 122) 00:08:36.207 19963.274 - 20064.098: 65.6845% ( 120) 00:08:36.207 20064.098 - 20164.923: 67.6786% ( 134) 00:08:36.207 20164.923 - 20265.748: 69.6577% ( 133) 00:08:36.207 20265.748 - 20366.572: 71.3095% ( 111) 00:08:36.207 20366.572 - 20467.397: 73.0952% ( 120) 00:08:36.207 20467.397 - 20568.222: 74.6429% ( 104) 00:08:36.207 20568.222 - 20669.046: 76.2054% ( 105) 00:08:36.207 20669.046 - 20769.871: 77.7381% ( 103) 00:08:36.207 20769.871 - 20870.695: 79.1667% ( 96) 00:08:36.207 20870.695 - 20971.520: 80.5804% ( 95) 00:08:36.207 20971.520 - 21072.345: 81.6518% ( 72) 00:08:36.207 21072.345 - 21173.169: 82.6190% ( 65) 00:08:36.207 21173.169 - 21273.994: 83.7054% ( 73) 00:08:36.207 21273.994 - 21374.818: 84.7619% ( 71) 00:08:36.207 21374.818 - 21475.643: 85.5804% ( 55) 00:08:36.207 21475.643 - 21576.468: 86.5030% ( 62) 00:08:36.207 21576.468 - 21677.292: 87.2917% ( 53) 00:08:36.207 21677.292 - 21778.117: 88.1548% ( 58) 00:08:36.207 21778.117 - 21878.942: 89.0327% ( 59) 00:08:36.207 21878.942 - 21979.766: 89.9107% ( 59) 00:08:36.207 21979.766 - 22080.591: 90.5060% ( 40) 00:08:36.207 22080.591 - 22181.415: 91.1756% ( 45) 00:08:36.207 22181.415 - 22282.240: 91.6369% ( 31) 00:08:36.207 22282.240 - 22383.065: 92.2619% ( 42) 00:08:36.207 22383.065 - 22483.889: 92.7381% ( 32) 00:08:36.207 22483.889 - 22584.714: 93.1548% ( 28) 00:08:36.207 22584.714 - 22685.538: 93.6310% ( 32) 00:08:36.207 22685.538 - 22786.363: 94.0476% ( 28) 00:08:36.207 
22786.363 - 22887.188: 94.4940% ( 30) 00:08:36.207 22887.188 - 22988.012: 94.7917% ( 20) 00:08:36.207 22988.012 - 23088.837: 95.2232% ( 29) 00:08:36.207 23088.837 - 23189.662: 95.5208% ( 20) 00:08:36.207 23189.662 - 23290.486: 95.8185% ( 20) 00:08:36.207 23290.486 - 23391.311: 96.0863% ( 18) 00:08:36.207 23391.311 - 23492.135: 96.4286% ( 23) 00:08:36.207 23492.135 - 23592.960: 96.6667% ( 16) 00:08:36.207 23592.960 - 23693.785: 96.9196% ( 17) 00:08:36.207 23693.785 - 23794.609: 97.2470% ( 22) 00:08:36.207 23794.609 - 23895.434: 97.3958% ( 10) 00:08:36.207 23895.434 - 23996.258: 97.5149% ( 8) 00:08:36.207 23996.258 - 24097.083: 97.6042% ( 6) 00:08:36.207 24097.083 - 24197.908: 97.6339% ( 2) 00:08:36.207 24197.908 - 24298.732: 97.6637% ( 2) 00:08:36.207 24298.732 - 24399.557: 97.7679% ( 7) 00:08:36.207 24399.557 - 24500.382: 97.7976% ( 2) 00:08:36.207 24500.382 - 24601.206: 97.8571% ( 4) 00:08:36.207 24601.206 - 24702.031: 97.9018% ( 3) 00:08:36.207 24702.031 - 24802.855: 97.9464% ( 3) 00:08:36.207 24802.855 - 24903.680: 98.0357% ( 6) 00:08:36.207 24903.680 - 25004.505: 98.0506% ( 1) 00:08:36.207 25004.505 - 25105.329: 98.0952% ( 3) 00:08:36.207 26214.400 - 26416.049: 98.1845% ( 6) 00:08:36.207 26416.049 - 26617.698: 98.3185% ( 9) 00:08:36.207 26617.698 - 26819.348: 98.4524% ( 9) 00:08:36.207 26819.348 - 27020.997: 98.5863% ( 9) 00:08:36.207 27020.997 - 27222.646: 98.7202% ( 9) 00:08:36.207 27222.646 - 27424.295: 98.8542% ( 9) 00:08:36.207 27424.295 - 27625.945: 99.0030% ( 10) 00:08:36.207 27625.945 - 27827.594: 99.0476% ( 3) 00:08:36.207 35086.966 - 35288.615: 99.0625% ( 1) 00:08:36.207 35288.615 - 35490.265: 99.1220% ( 4) 00:08:36.207 35490.265 - 35691.914: 99.2411% ( 8) 00:08:36.207 35691.914 - 35893.563: 99.3006% ( 4) 00:08:36.207 35893.563 - 36095.212: 99.3899% ( 6) 00:08:36.207 36095.212 - 36296.862: 99.4643% ( 5) 00:08:36.207 36296.862 - 36498.511: 99.5685% ( 7) 00:08:36.207 36498.511 - 36700.160: 99.6429% ( 5) 00:08:36.207 36700.160 - 36901.809: 99.7173% ( 5) 00:08:36.207 36901.809 - 37103.458: 99.8214% ( 7) 00:08:36.207 37103.458 - 37305.108: 99.9107% ( 6) 00:08:36.207 37305.108 - 37506.757: 99.9851% ( 5) 00:08:36.207 37506.757 - 37708.406: 100.0000% ( 1) 00:08:36.207 00:08:36.207 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:36.207 ============================================================================== 00:08:36.207 Range in us Cumulative IO count 00:08:36.207 11241.945 - 11292.357: 0.1042% ( 7) 00:08:36.207 11292.357 - 11342.769: 0.1488% ( 3) 00:08:36.207 11342.769 - 11393.182: 0.1786% ( 2) 00:08:36.207 11393.182 - 11443.594: 0.2381% ( 4) 00:08:36.207 11443.594 - 11494.006: 0.2976% ( 4) 00:08:36.207 11494.006 - 11544.418: 0.3274% ( 2) 00:08:36.207 11544.418 - 11594.831: 0.3720% ( 3) 00:08:36.207 11594.831 - 11645.243: 0.4167% ( 3) 00:08:36.207 11645.243 - 11695.655: 0.4613% ( 3) 00:08:36.207 11695.655 - 11746.068: 0.5060% ( 3) 00:08:36.207 11746.068 - 11796.480: 0.5506% ( 3) 00:08:36.207 11796.480 - 11846.892: 0.5804% ( 2) 00:08:36.207 11846.892 - 11897.305: 0.6250% ( 3) 00:08:36.207 11897.305 - 11947.717: 0.6696% ( 3) 00:08:36.207 11947.717 - 11998.129: 0.7143% ( 3) 00:08:36.207 11998.129 - 12048.542: 0.7738% ( 4) 00:08:36.207 12048.542 - 12098.954: 0.8185% ( 3) 00:08:36.207 12098.954 - 12149.366: 0.8631% ( 3) 00:08:36.207 12149.366 - 12199.778: 0.8929% ( 2) 00:08:36.207 12199.778 - 12250.191: 0.9375% ( 3) 00:08:36.207 12250.191 - 12300.603: 0.9524% ( 1) 00:08:36.207 13107.200 - 13208.025: 1.0119% ( 4) 00:08:36.207 13208.025 - 13308.849: 1.0863% ( 5) 
00:08:36.207 13308.849 - 13409.674: 1.1458% ( 4) 00:08:36.207 13409.674 - 13510.498: 1.2351% ( 6) 00:08:36.207 13510.498 - 13611.323: 1.2946% ( 4) 00:08:36.207 13611.323 - 13712.148: 1.3690% ( 5) 00:08:36.207 13712.148 - 13812.972: 1.4137% ( 3) 00:08:36.207 13812.972 - 13913.797: 1.4732% ( 4) 00:08:36.207 13913.797 - 14014.622: 1.5179% ( 3) 00:08:36.207 14014.622 - 14115.446: 1.5625% ( 3) 00:08:36.207 14115.446 - 14216.271: 1.5923% ( 2) 00:08:36.207 14216.271 - 14317.095: 1.6964% ( 7) 00:08:36.207 14317.095 - 14417.920: 1.9048% ( 14) 00:08:36.207 14417.920 - 14518.745: 2.0833% ( 12) 00:08:36.207 14518.745 - 14619.569: 2.2768% ( 13) 00:08:36.207 14619.569 - 14720.394: 2.4554% ( 12) 00:08:36.207 14720.394 - 14821.218: 2.6935% ( 16) 00:08:36.207 14821.218 - 14922.043: 3.0060% ( 21) 00:08:36.207 14922.043 - 15022.868: 3.3631% ( 24) 00:08:36.207 15022.868 - 15123.692: 3.6458% ( 19) 00:08:36.207 15123.692 - 15224.517: 4.0327% ( 26) 00:08:36.207 15224.517 - 15325.342: 4.4345% ( 27) 00:08:36.207 15325.342 - 15426.166: 4.8958% ( 31) 00:08:36.207 15426.166 - 15526.991: 5.4018% ( 34) 00:08:36.207 15526.991 - 15627.815: 6.0565% ( 44) 00:08:36.207 15627.815 - 15728.640: 6.5923% ( 36) 00:08:36.207 15728.640 - 15829.465: 7.1280% ( 36) 00:08:36.207 15829.465 - 15930.289: 7.6190% ( 33) 00:08:36.207 15930.289 - 16031.114: 8.2589% ( 43) 00:08:36.207 16031.114 - 16131.938: 9.1220% ( 58) 00:08:36.207 16131.938 - 16232.763: 9.9851% ( 58) 00:08:36.207 16232.763 - 16333.588: 11.0863% ( 74) 00:08:36.207 16333.588 - 16434.412: 12.1726% ( 73) 00:08:36.207 16434.412 - 16535.237: 13.1399% ( 65) 00:08:36.207 16535.237 - 16636.062: 14.2113% ( 72) 00:08:36.207 16636.062 - 16736.886: 15.3274% ( 75) 00:08:36.207 16736.886 - 16837.711: 16.5327% ( 81) 00:08:36.207 16837.711 - 16938.535: 17.7976% ( 85) 00:08:36.207 16938.535 - 17039.360: 19.0327% ( 83) 00:08:36.207 17039.360 - 17140.185: 20.2976% ( 85) 00:08:36.207 17140.185 - 17241.009: 21.6815% ( 93) 00:08:36.207 17241.009 - 17341.834: 23.0804% ( 94) 00:08:36.207 17341.834 - 17442.658: 24.5089% ( 96) 00:08:36.207 17442.658 - 17543.483: 25.9077% ( 94) 00:08:36.207 17543.483 - 17644.308: 27.2321% ( 89) 00:08:36.207 17644.308 - 17745.132: 28.6161% ( 93) 00:08:36.207 17745.132 - 17845.957: 30.1042% ( 100) 00:08:36.207 17845.957 - 17946.782: 31.7857% ( 113) 00:08:36.207 17946.782 - 18047.606: 33.4524% ( 112) 00:08:36.207 18047.606 - 18148.431: 35.0446% ( 107) 00:08:36.207 18148.431 - 18249.255: 36.6369% ( 107) 00:08:36.207 18249.255 - 18350.080: 38.0952% ( 98) 00:08:36.207 18350.080 - 18450.905: 39.5685% ( 99) 00:08:36.207 18450.905 - 18551.729: 41.1458% ( 106) 00:08:36.207 18551.729 - 18652.554: 42.7083% ( 105) 00:08:36.207 18652.554 - 18753.378: 44.2262% ( 102) 00:08:36.207 18753.378 - 18854.203: 45.5952% ( 92) 00:08:36.207 18854.203 - 18955.028: 46.8601% ( 85) 00:08:36.207 18955.028 - 19055.852: 48.0060% ( 77) 00:08:36.207 19055.852 - 19156.677: 49.4494% ( 97) 00:08:36.207 19156.677 - 19257.502: 50.9077% ( 98) 00:08:36.207 19257.502 - 19358.326: 52.5298% ( 109) 00:08:36.207 19358.326 - 19459.151: 54.2560% ( 116) 00:08:36.207 19459.151 - 19559.975: 56.2202% ( 132) 00:08:36.207 19559.975 - 19660.800: 57.9613% ( 117) 00:08:36.207 19660.800 - 19761.625: 59.8661% ( 128) 00:08:36.207 19761.625 - 19862.449: 61.9048% ( 137) 00:08:36.207 19862.449 - 19963.274: 63.9881% ( 140) 00:08:36.207 19963.274 - 20064.098: 66.0268% ( 137) 00:08:36.207 20064.098 - 20164.923: 67.9464% ( 129) 00:08:36.207 20164.923 - 20265.748: 69.8810% ( 130) 00:08:36.207 20265.748 - 20366.572: 71.6518% ( 119) 
00:08:36.207 20366.572 - 20467.397: 73.3780% ( 116) 00:08:36.207 20467.397 - 20568.222: 75.0149% ( 110) 00:08:36.207 20568.222 - 20669.046: 76.5774% ( 105) 00:08:36.207 20669.046 - 20769.871: 78.2589% ( 113) 00:08:36.207 20769.871 - 20870.695: 79.7024% ( 97) 00:08:36.207 20870.695 - 20971.520: 81.2202% ( 102) 00:08:36.207 20971.520 - 21072.345: 82.3661% ( 77) 00:08:36.207 21072.345 - 21173.169: 83.5268% ( 78) 00:08:36.207 21173.169 - 21273.994: 84.6131% ( 73) 00:08:36.207 21273.994 - 21374.818: 85.6399% ( 69) 00:08:36.207 21374.818 - 21475.643: 86.5625% ( 62) 00:08:36.208 21475.643 - 21576.468: 87.4851% ( 62) 00:08:36.208 21576.468 - 21677.292: 88.2738% ( 53) 00:08:36.208 21677.292 - 21778.117: 88.9583% ( 46) 00:08:36.208 21778.117 - 21878.942: 89.5089% ( 37) 00:08:36.208 21878.942 - 21979.766: 90.0000% ( 33) 00:08:36.208 21979.766 - 22080.591: 90.3423% ( 23) 00:08:36.208 22080.591 - 22181.415: 90.7292% ( 26) 00:08:36.208 22181.415 - 22282.240: 91.1607% ( 29) 00:08:36.208 22282.240 - 22383.065: 91.6518% ( 33) 00:08:36.208 22383.065 - 22483.889: 92.2173% ( 38) 00:08:36.208 22483.889 - 22584.714: 92.7530% ( 36) 00:08:36.208 22584.714 - 22685.538: 93.3482% ( 40) 00:08:36.208 22685.538 - 22786.363: 93.8095% ( 31) 00:08:36.208 22786.363 - 22887.188: 94.2857% ( 32) 00:08:36.208 22887.188 - 22988.012: 94.7470% ( 31) 00:08:36.208 22988.012 - 23088.837: 95.1935% ( 30) 00:08:36.208 23088.837 - 23189.662: 95.6994% ( 34) 00:08:36.208 23189.662 - 23290.486: 96.2054% ( 34) 00:08:36.208 23290.486 - 23391.311: 96.6220% ( 28) 00:08:36.208 23391.311 - 23492.135: 96.9792% ( 24) 00:08:36.208 23492.135 - 23592.960: 97.2917% ( 21) 00:08:36.208 23592.960 - 23693.785: 97.4405% ( 10) 00:08:36.208 23693.785 - 23794.609: 97.5595% ( 8) 00:08:36.208 23794.609 - 23895.434: 97.6339% ( 5) 00:08:36.208 23895.434 - 23996.258: 97.6935% ( 4) 00:08:36.208 23996.258 - 24097.083: 97.7530% ( 4) 00:08:36.208 24097.083 - 24197.908: 97.8125% ( 4) 00:08:36.208 24197.908 - 24298.732: 97.8720% ( 4) 00:08:36.208 24298.732 - 24399.557: 97.9315% ( 4) 00:08:36.208 24399.557 - 24500.382: 97.9911% ( 4) 00:08:36.208 24500.382 - 24601.206: 98.0506% ( 4) 00:08:36.208 24601.206 - 24702.031: 98.0952% ( 3) 00:08:36.208 25811.102 - 26012.751: 98.1548% ( 4) 00:08:36.208 26012.751 - 26214.400: 98.3185% ( 11) 00:08:36.208 26214.400 - 26416.049: 98.4524% ( 9) 00:08:36.208 26416.049 - 26617.698: 98.6161% ( 11) 00:08:36.208 26617.698 - 26819.348: 98.7798% ( 11) 00:08:36.208 26819.348 - 27020.997: 98.9286% ( 10) 00:08:36.208 27020.997 - 27222.646: 99.0476% ( 8) 00:08:36.208 35288.615 - 35490.265: 99.1071% ( 4) 00:08:36.208 35490.265 - 35691.914: 99.1964% ( 6) 00:08:36.208 35691.914 - 35893.563: 99.3006% ( 7) 00:08:36.208 35893.563 - 36095.212: 99.4048% ( 7) 00:08:36.208 36095.212 - 36296.862: 99.4792% ( 5) 00:08:36.208 36296.862 - 36498.511: 99.5536% ( 5) 00:08:36.208 36498.511 - 36700.160: 99.6577% ( 7) 00:08:36.208 36700.160 - 36901.809: 99.7619% ( 7) 00:08:36.208 36901.809 - 37103.458: 99.8363% ( 5) 00:08:36.208 37103.458 - 37305.108: 99.9405% ( 7) 00:08:36.208 37305.108 - 37506.757: 100.0000% ( 4) 00:08:36.208 00:08:36.208 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:36.208 ============================================================================== 00:08:36.208 Range in us Cumulative IO count 00:08:36.208 8620.505 - 8670.917: 0.0446% ( 3) 00:08:36.208 8670.917 - 8721.329: 0.1042% ( 4) 00:08:36.208 8721.329 - 8771.742: 0.1488% ( 3) 00:08:36.208 8771.742 - 8822.154: 0.1935% ( 3) 00:08:36.208 8822.154 - 8872.566: 0.2232% ( 2) 
00:08:36.208 8872.566 - 8922.978: 0.2679% ( 3) 00:08:36.208 8922.978 - 8973.391: 0.3125% ( 3) 00:08:36.208 8973.391 - 9023.803: 0.3423% ( 2) 00:08:36.208 9023.803 - 9074.215: 0.3869% ( 3) 00:08:36.208 9074.215 - 9124.628: 0.4167% ( 2) 00:08:36.208 9124.628 - 9175.040: 0.4613% ( 3) 00:08:36.208 9175.040 - 9225.452: 0.5060% ( 3) 00:08:36.208 9225.452 - 9275.865: 0.5506% ( 3) 00:08:36.208 9275.865 - 9326.277: 0.5952% ( 3) 00:08:36.208 9326.277 - 9376.689: 0.6399% ( 3) 00:08:36.208 9376.689 - 9427.102: 0.6845% ( 3) 00:08:36.208 9427.102 - 9477.514: 0.7440% ( 4) 00:08:36.208 9477.514 - 9527.926: 0.7738% ( 2) 00:08:36.208 9527.926 - 9578.338: 0.8185% ( 3) 00:08:36.208 9578.338 - 9628.751: 0.8631% ( 3) 00:08:36.208 9628.751 - 9679.163: 0.8929% ( 2) 00:08:36.208 9679.163 - 9729.575: 0.9375% ( 3) 00:08:36.208 9729.575 - 9779.988: 0.9524% ( 1) 00:08:36.208 13712.148 - 13812.972: 0.9673% ( 1) 00:08:36.208 13812.972 - 13913.797: 1.0417% ( 5) 00:08:36.208 13913.797 - 14014.622: 1.1012% ( 4) 00:08:36.208 14014.622 - 14115.446: 1.1607% ( 4) 00:08:36.208 14115.446 - 14216.271: 1.2798% ( 8) 00:08:36.208 14216.271 - 14317.095: 1.4732% ( 13) 00:08:36.208 14317.095 - 14417.920: 1.7411% ( 18) 00:08:36.208 14417.920 - 14518.745: 2.0089% ( 18) 00:08:36.208 14518.745 - 14619.569: 2.2470% ( 16) 00:08:36.208 14619.569 - 14720.394: 2.5000% ( 17) 00:08:36.208 14720.394 - 14821.218: 2.7381% ( 16) 00:08:36.208 14821.218 - 14922.043: 2.9464% ( 14) 00:08:36.208 14922.043 - 15022.868: 3.2440% ( 20) 00:08:36.208 15022.868 - 15123.692: 3.5714% ( 22) 00:08:36.208 15123.692 - 15224.517: 3.9286% ( 24) 00:08:36.208 15224.517 - 15325.342: 4.3899% ( 31) 00:08:36.208 15325.342 - 15426.166: 5.0149% ( 42) 00:08:36.208 15426.166 - 15526.991: 5.6696% ( 44) 00:08:36.208 15526.991 - 15627.815: 6.2798% ( 41) 00:08:36.208 15627.815 - 15728.640: 6.8155% ( 36) 00:08:36.208 15728.640 - 15829.465: 7.5000% ( 46) 00:08:36.208 15829.465 - 15930.289: 8.1250% ( 42) 00:08:36.208 15930.289 - 16031.114: 8.9137% ( 53) 00:08:36.208 16031.114 - 16131.938: 9.6726% ( 51) 00:08:36.208 16131.938 - 16232.763: 10.5208% ( 57) 00:08:36.208 16232.763 - 16333.588: 11.4732% ( 64) 00:08:36.208 16333.588 - 16434.412: 12.5446% ( 72) 00:08:36.208 16434.412 - 16535.237: 13.6756% ( 76) 00:08:36.208 16535.237 - 16636.062: 14.8363% ( 78) 00:08:36.208 16636.062 - 16736.886: 16.2351% ( 94) 00:08:36.208 16736.886 - 16837.711: 17.5595% ( 89) 00:08:36.208 16837.711 - 16938.535: 18.9137% ( 91) 00:08:36.208 16938.535 - 17039.360: 20.3125% ( 94) 00:08:36.208 17039.360 - 17140.185: 21.7262% ( 95) 00:08:36.208 17140.185 - 17241.009: 23.1994% ( 99) 00:08:36.208 17241.009 - 17341.834: 24.7619% ( 105) 00:08:36.208 17341.834 - 17442.658: 26.4583% ( 114) 00:08:36.208 17442.658 - 17543.483: 28.0952% ( 110) 00:08:36.208 17543.483 - 17644.308: 29.7173% ( 109) 00:08:36.208 17644.308 - 17745.132: 31.1756% ( 98) 00:08:36.208 17745.132 - 17845.957: 32.6935% ( 102) 00:08:36.208 17845.957 - 17946.782: 34.1815% ( 100) 00:08:36.208 17946.782 - 18047.606: 35.5506% ( 92) 00:08:36.208 18047.606 - 18148.431: 36.7262% ( 79) 00:08:36.208 18148.431 - 18249.255: 37.6488% ( 62) 00:08:36.208 18249.255 - 18350.080: 38.6161% ( 65) 00:08:36.208 18350.080 - 18450.905: 39.6726% ( 71) 00:08:36.208 18450.905 - 18551.729: 40.6696% ( 67) 00:08:36.208 18551.729 - 18652.554: 41.8304% ( 78) 00:08:36.208 18652.554 - 18753.378: 43.0952% ( 85) 00:08:36.208 18753.378 - 18854.203: 44.5833% ( 100) 00:08:36.208 18854.203 - 18955.028: 45.8780% ( 87) 00:08:36.208 18955.028 - 19055.852: 47.4256% ( 104) 00:08:36.208 19055.852 - 
19156.677: 48.9286% ( 101) 00:08:36.208 19156.677 - 19257.502: 50.4613% ( 103) 00:08:36.208 19257.502 - 19358.326: 52.0387% ( 106) 00:08:36.208 19358.326 - 19459.151: 53.6458% ( 108) 00:08:36.208 19459.151 - 19559.975: 55.4762% ( 123) 00:08:36.208 19559.975 - 19660.800: 57.3363% ( 125) 00:08:36.208 19660.800 - 19761.625: 59.2262% ( 127) 00:08:36.208 19761.625 - 19862.449: 61.0863% ( 125) 00:08:36.208 19862.449 - 19963.274: 63.1101% ( 136) 00:08:36.208 19963.274 - 20064.098: 65.2232% ( 142) 00:08:36.208 20064.098 - 20164.923: 67.4851% ( 152) 00:08:36.208 20164.923 - 20265.748: 69.5387% ( 138) 00:08:36.208 20265.748 - 20366.572: 71.6071% ( 139) 00:08:36.208 20366.572 - 20467.397: 73.5119% ( 128) 00:08:36.208 20467.397 - 20568.222: 75.2083% ( 114) 00:08:36.208 20568.222 - 20669.046: 76.8452% ( 110) 00:08:36.208 20669.046 - 20769.871: 78.3631% ( 102) 00:08:36.208 20769.871 - 20870.695: 79.8214% ( 98) 00:08:36.208 20870.695 - 20971.520: 81.2649% ( 97) 00:08:36.208 20971.520 - 21072.345: 82.6488% ( 93) 00:08:36.208 21072.345 - 21173.169: 83.8244% ( 79) 00:08:36.208 21173.169 - 21273.994: 85.0298% ( 81) 00:08:36.208 21273.994 - 21374.818: 85.9077% ( 59) 00:08:36.208 21374.818 - 21475.643: 86.6369% ( 49) 00:08:36.208 21475.643 - 21576.468: 87.3214% ( 46) 00:08:36.208 21576.468 - 21677.292: 88.0060% ( 46) 00:08:36.208 21677.292 - 21778.117: 88.4673% ( 31) 00:08:36.208 21778.117 - 21878.942: 88.9583% ( 33) 00:08:36.208 21878.942 - 21979.766: 89.4940% ( 36) 00:08:36.208 21979.766 - 22080.591: 90.1042% ( 41) 00:08:36.208 22080.591 - 22181.415: 90.6250% ( 35) 00:08:36.208 22181.415 - 22282.240: 91.1458% ( 35) 00:08:36.208 22282.240 - 22383.065: 91.7262% ( 39) 00:08:36.208 22383.065 - 22483.889: 92.2619% ( 36) 00:08:36.208 22483.889 - 22584.714: 92.7827% ( 35) 00:08:36.208 22584.714 - 22685.538: 93.2440% ( 31) 00:08:36.208 22685.538 - 22786.363: 93.6905% ( 30) 00:08:36.208 22786.363 - 22887.188: 94.1220% ( 29) 00:08:36.208 22887.188 - 22988.012: 94.6280% ( 34) 00:08:36.208 22988.012 - 23088.837: 95.0298% ( 27) 00:08:36.208 23088.837 - 23189.662: 95.3571% ( 22) 00:08:36.208 23189.662 - 23290.486: 95.6399% ( 19) 00:08:36.208 23290.486 - 23391.311: 95.9524% ( 21) 00:08:36.208 23391.311 - 23492.135: 96.2500% ( 20) 00:08:36.208 23492.135 - 23592.960: 96.5030% ( 17) 00:08:36.208 23592.960 - 23693.785: 96.7262% ( 15) 00:08:36.208 23693.785 - 23794.609: 96.9345% ( 14) 00:08:36.208 23794.609 - 23895.434: 97.0833% ( 10) 00:08:36.208 23895.434 - 23996.258: 97.2173% ( 9) 00:08:36.208 23996.258 - 24097.083: 97.3810% ( 11) 00:08:36.208 24097.083 - 24197.908: 97.5298% ( 10) 00:08:36.208 24197.908 - 24298.732: 97.6935% ( 11) 00:08:36.208 24298.732 - 24399.557: 97.8125% ( 8) 00:08:36.208 24399.557 - 24500.382: 97.8720% ( 4) 00:08:36.208 24500.382 - 24601.206: 97.9167% ( 3) 00:08:36.208 24601.206 - 24702.031: 97.9613% ( 3) 00:08:36.208 24702.031 - 24802.855: 98.0208% ( 4) 00:08:36.208 24802.855 - 24903.680: 98.0804% ( 4) 00:08:36.208 24903.680 - 25004.505: 98.0952% ( 1) 00:08:36.208 26819.348 - 27020.997: 98.2292% ( 9) 00:08:36.208 27020.997 - 27222.646: 98.3929% ( 11) 00:08:36.209 27222.646 - 27424.295: 98.5565% ( 11) 00:08:36.209 27424.295 - 27625.945: 98.7054% ( 10) 00:08:36.209 27625.945 - 27827.594: 98.8244% ( 8) 00:08:36.209 27827.594 - 28029.243: 98.9881% ( 11) 00:08:36.209 28029.243 - 28230.892: 99.0476% ( 4) 00:08:36.209 35893.563 - 36095.212: 99.0625% ( 1) 00:08:36.209 36095.212 - 36296.862: 99.1518% ( 6) 00:08:36.209 36296.862 - 36498.511: 99.2262% ( 5) 00:08:36.209 36498.511 - 36700.160: 99.3155% ( 6) 
00:08:36.209 36700.160 - 36901.809: 99.4048% ( 6) 00:08:36.209 36901.809 - 37103.458: 99.5238% ( 8) 00:08:36.209 37103.458 - 37305.108: 99.6280% ( 7) 00:08:36.209 37305.108 - 37506.757: 99.7321% ( 7) 00:08:36.209 37506.757 - 37708.406: 99.8363% ( 7) 00:08:36.209 37708.406 - 37910.055: 99.9554% ( 8) 00:08:36.209 37910.055 - 38111.705: 100.0000% ( 3) 00:08:36.209 00:08:36.209 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:36.209 ============================================================================== 00:08:36.209 Range in us Cumulative IO count 00:08:36.209 7713.083 - 7763.495: 0.0442% ( 3) 00:08:36.209 7763.495 - 7813.908: 0.0884% ( 3) 00:08:36.209 7813.908 - 7864.320: 0.1327% ( 3) 00:08:36.209 7864.320 - 7914.732: 0.1769% ( 3) 00:08:36.209 7914.732 - 7965.145: 0.2211% ( 3) 00:08:36.209 7965.145 - 8015.557: 0.2948% ( 5) 00:08:36.209 8015.557 - 8065.969: 0.3538% ( 4) 00:08:36.209 8065.969 - 8116.382: 0.3980% ( 3) 00:08:36.209 8116.382 - 8166.794: 0.4422% ( 3) 00:08:36.209 8166.794 - 8217.206: 0.4864% ( 3) 00:08:36.209 8217.206 - 8267.618: 0.5307% ( 3) 00:08:36.209 8267.618 - 8318.031: 0.5749% ( 3) 00:08:36.209 8318.031 - 8368.443: 0.6044% ( 2) 00:08:36.209 8368.443 - 8418.855: 0.6486% ( 3) 00:08:36.209 8418.855 - 8469.268: 0.6928% ( 3) 00:08:36.209 8469.268 - 8519.680: 0.7370% ( 3) 00:08:36.209 8519.680 - 8570.092: 0.7812% ( 3) 00:08:36.209 8570.092 - 8620.505: 0.8255% ( 3) 00:08:36.209 8620.505 - 8670.917: 0.8550% ( 2) 00:08:36.209 8670.917 - 8721.329: 0.8992% ( 3) 00:08:36.209 8721.329 - 8771.742: 0.9287% ( 2) 00:08:36.209 8771.742 - 8822.154: 0.9434% ( 1) 00:08:36.209 14014.622 - 14115.446: 1.0024% ( 4) 00:08:36.209 14115.446 - 14216.271: 1.1645% ( 11) 00:08:36.209 14216.271 - 14317.095: 1.3414% ( 12) 00:08:36.209 14317.095 - 14417.920: 1.6067% ( 18) 00:08:36.209 14417.920 - 14518.745: 1.8721% ( 18) 00:08:36.209 14518.745 - 14619.569: 2.0932% ( 15) 00:08:36.209 14619.569 - 14720.394: 2.3585% ( 18) 00:08:36.209 14720.394 - 14821.218: 2.6091% ( 17) 00:08:36.209 14821.218 - 14922.043: 2.8744% ( 18) 00:08:36.209 14922.043 - 15022.868: 3.2429% ( 25) 00:08:36.209 15022.868 - 15123.692: 3.6262% ( 26) 00:08:36.209 15123.692 - 15224.517: 4.0979% ( 32) 00:08:36.209 15224.517 - 15325.342: 4.6875% ( 40) 00:08:36.209 15325.342 - 15426.166: 5.1445% ( 31) 00:08:36.209 15426.166 - 15526.991: 5.6751% ( 36) 00:08:36.209 15526.991 - 15627.815: 6.2500% ( 39) 00:08:36.209 15627.815 - 15728.640: 6.9281% ( 46) 00:08:36.209 15728.640 - 15829.465: 7.6356% ( 48) 00:08:36.209 15829.465 - 15930.289: 8.4316% ( 54) 00:08:36.209 15930.289 - 16031.114: 9.0802% ( 44) 00:08:36.209 16031.114 - 16131.938: 9.7877% ( 48) 00:08:36.209 16131.938 - 16232.763: 10.5985% ( 55) 00:08:36.209 16232.763 - 16333.588: 11.5566% ( 65) 00:08:36.209 16333.588 - 16434.412: 12.6474% ( 74) 00:08:36.209 16434.412 - 16535.237: 13.8561% ( 82) 00:08:36.209 16535.237 - 16636.062: 15.1091% ( 85) 00:08:36.209 16636.062 - 16736.886: 16.3325% ( 83) 00:08:36.209 16736.886 - 16837.711: 17.6739% ( 91) 00:08:36.209 16837.711 - 16938.535: 19.0596% ( 94) 00:08:36.209 16938.535 - 17039.360: 20.4452% ( 94) 00:08:36.209 17039.360 - 17140.185: 21.7276% ( 87) 00:08:36.209 17140.185 - 17241.009: 23.2164% ( 101) 00:08:36.209 17241.009 - 17341.834: 24.8526% ( 111) 00:08:36.209 17341.834 - 17442.658: 26.2235% ( 93) 00:08:36.209 17442.658 - 17543.483: 27.8597% ( 111) 00:08:36.209 17543.483 - 17644.308: 29.4074% ( 105) 00:08:36.209 17644.308 - 17745.132: 30.8667% ( 99) 00:08:36.209 17745.132 - 17845.957: 32.5472% ( 114) 00:08:36.209 17845.957 
- 17946.782: 34.3160% ( 120) 00:08:36.209 17946.782 - 18047.606: 36.0112% ( 115) 00:08:36.209 18047.606 - 18148.431: 37.3379% ( 90) 00:08:36.209 18148.431 - 18249.255: 38.7677% ( 97) 00:08:36.209 18249.255 - 18350.080: 40.1975% ( 97) 00:08:36.209 18350.080 - 18450.905: 41.7011% ( 102) 00:08:36.209 18450.905 - 18551.729: 43.3520% ( 112) 00:08:36.209 18551.729 - 18652.554: 44.9735% ( 110) 00:08:36.209 18652.554 - 18753.378: 46.5065% ( 104) 00:08:36.209 18753.378 - 18854.203: 47.9511% ( 98) 00:08:36.209 18854.203 - 18955.028: 49.2925% ( 91) 00:08:36.209 18955.028 - 19055.852: 50.6044% ( 89) 00:08:36.209 19055.852 - 19156.677: 51.9163% ( 89) 00:08:36.209 19156.677 - 19257.502: 53.3461% ( 97) 00:08:36.209 19257.502 - 19358.326: 54.7612% ( 96) 00:08:36.209 19358.326 - 19459.151: 56.1173% ( 92) 00:08:36.209 19459.151 - 19559.975: 57.8125% ( 115) 00:08:36.209 19559.975 - 19660.800: 59.4634% ( 112) 00:08:36.209 19660.800 - 19761.625: 61.2176% ( 119) 00:08:36.209 19761.625 - 19862.449: 63.0307% ( 123) 00:08:36.209 19862.449 - 19963.274: 64.8143% ( 121) 00:08:36.209 19963.274 - 20064.098: 66.5831% ( 120) 00:08:36.209 20064.098 - 20164.923: 68.3962% ( 123) 00:08:36.209 20164.923 - 20265.748: 70.0472% ( 112) 00:08:36.209 20265.748 - 20366.572: 71.7571% ( 116) 00:08:36.209 20366.572 - 20467.397: 73.4375% ( 114) 00:08:36.209 20467.397 - 20568.222: 75.1327% ( 115) 00:08:36.209 20568.222 - 20669.046: 76.7099% ( 107) 00:08:36.209 20669.046 - 20769.871: 78.1840% ( 100) 00:08:36.209 20769.871 - 20870.695: 79.8496% ( 113) 00:08:36.209 20870.695 - 20971.520: 81.2647% ( 96) 00:08:36.209 20971.520 - 21072.345: 82.5472% ( 87) 00:08:36.209 21072.345 - 21173.169: 83.6085% ( 72) 00:08:36.209 21173.169 - 21273.994: 84.6846% ( 73) 00:08:36.209 21273.994 - 21374.818: 85.7164% ( 70) 00:08:36.209 21374.818 - 21475.643: 86.6745% ( 65) 00:08:36.209 21475.643 - 21576.468: 87.4853% ( 55) 00:08:36.209 21576.468 - 21677.292: 88.3550% ( 59) 00:08:36.209 21677.292 - 21778.117: 89.1067% ( 51) 00:08:36.209 21778.117 - 21878.942: 89.8732% ( 52) 00:08:36.209 21878.942 - 21979.766: 90.5366% ( 45) 00:08:36.209 21979.766 - 22080.591: 91.2441% ( 48) 00:08:36.209 22080.591 - 22181.415: 91.8927% ( 44) 00:08:36.209 22181.415 - 22282.240: 92.4233% ( 36) 00:08:36.209 22282.240 - 22383.065: 92.8656% ( 30) 00:08:36.209 22383.065 - 22483.889: 93.3078% ( 30) 00:08:36.209 22483.889 - 22584.714: 93.6763% ( 25) 00:08:36.209 22584.714 - 22685.538: 94.1775% ( 34) 00:08:36.209 22685.538 - 22786.363: 94.5607% ( 26) 00:08:36.209 22786.363 - 22887.188: 95.0177% ( 31) 00:08:36.209 22887.188 - 22988.012: 95.5041% ( 33) 00:08:36.209 22988.012 - 23088.837: 95.9463% ( 30) 00:08:36.209 23088.837 - 23189.662: 96.2706% ( 22) 00:08:36.209 23189.662 - 23290.486: 96.5360% ( 18) 00:08:36.209 23290.486 - 23391.311: 96.8013% ( 18) 00:08:36.209 23391.311 - 23492.135: 97.1108% ( 21) 00:08:36.209 23492.135 - 23592.960: 97.4057% ( 20) 00:08:36.209 23592.960 - 23693.785: 97.6415% ( 16) 00:08:36.209 23693.785 - 23794.609: 97.8774% ( 16) 00:08:36.209 23794.609 - 23895.434: 98.0542% ( 12) 00:08:36.209 23895.434 - 23996.258: 98.2017% ( 10) 00:08:36.209 23996.258 - 24097.083: 98.3785% ( 12) 00:08:36.209 24097.083 - 24197.908: 98.5259% ( 10) 00:08:36.209 24197.908 - 24298.732: 98.6291% ( 7) 00:08:36.209 24298.732 - 24399.557: 98.6881% ( 4) 00:08:36.209 24399.557 - 24500.382: 98.7323% ( 3) 00:08:36.209 24500.382 - 24601.206: 98.7913% ( 4) 00:08:36.209 24601.206 - 24702.031: 98.8208% ( 2) 00:08:36.209 24702.031 - 24802.855: 98.8797% ( 4) 00:08:36.209 24802.855 - 24903.680: 98.9239% 
( 3) 00:08:36.209 24903.680 - 25004.505: 98.9829% ( 4) 00:08:36.209 25004.505 - 25105.329: 99.0271% ( 3) 00:08:36.209 25105.329 - 25206.154: 99.0566% ( 2) 00:08:36.209 27020.997 - 27222.646: 99.1008% ( 3) 00:08:36.209 27222.646 - 27424.295: 99.2630% ( 11) 00:08:36.209 27424.295 - 27625.945: 99.4104% ( 10) 00:08:36.209 27625.945 - 27827.594: 99.5283% ( 8) 00:08:36.209 27827.594 - 28029.243: 99.6757% ( 10) 00:08:36.209 28029.243 - 28230.892: 99.8231% ( 10) 00:08:36.210 28230.892 - 28432.542: 99.9853% ( 11) 00:08:36.210 28432.542 - 28634.191: 100.0000% ( 1) 00:08:36.210 00:08:36.210 21:50:20 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:37.598 Initializing NVMe Controllers 00:08:37.598 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.598 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.598 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.598 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.598 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:37.598 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:37.598 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:37.598 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:37.598 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:37.598 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:37.599 Initialization complete. Launching workers. 00:08:37.599 ======================================================== 00:08:37.599 Latency(us) 00:08:37.599 Device Information : IOPS MiB/s Average min max 00:08:37.599 PCIE (0000:00:11.0) NSID 1 from core 0: 6748.21 79.08 18986.28 13619.99 40535.83 00:08:37.599 PCIE (0000:00:13.0) NSID 1 from core 0: 6748.21 79.08 18963.31 12983.42 39213.94 00:08:37.599 PCIE (0000:00:10.0) NSID 1 from core 0: 6748.21 79.08 18942.59 12173.80 39474.26 00:08:37.599 PCIE (0000:00:12.0) NSID 1 from core 0: 6748.21 79.08 18921.03 11391.67 39066.63 00:08:37.599 PCIE (0000:00:12.0) NSID 2 from core 0: 6748.21 79.08 18900.78 8279.37 39970.20 00:08:37.599 PCIE (0000:00:12.0) NSID 3 from core 0: 6748.21 79.08 18880.40 6898.99 40459.57 00:08:37.599 ======================================================== 00:08:37.599 Total : 40489.29 474.48 18932.40 6898.99 40535.83 00:08:37.599 00:08:37.599 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:37.599 ================================================================================= 00:08:37.599 1.00000% : 14922.043us 00:08:37.599 10.00000% : 16535.237us 00:08:37.599 25.00000% : 17341.834us 00:08:37.599 50.00000% : 18854.203us 00:08:37.599 75.00000% : 19963.274us 00:08:37.599 90.00000% : 21273.994us 00:08:37.599 95.00000% : 22584.714us 00:08:37.599 98.00000% : 23895.434us 00:08:37.599 99.00000% : 28029.243us 00:08:37.599 99.50000% : 39523.249us 00:08:37.599 99.90000% : 40329.846us 00:08:37.599 99.99000% : 40733.145us 00:08:37.599 99.99900% : 40733.145us 00:08:37.599 99.99990% : 40733.145us 00:08:37.599 99.99999% : 40733.145us 00:08:37.599 00:08:37.599 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:37.599 ================================================================================= 00:08:37.599 1.00000% : 15325.342us 00:08:37.599 10.00000% : 16535.237us 00:08:37.599 25.00000% : 17442.658us 00:08:37.599 50.00000% : 18753.378us 00:08:37.599 75.00000% : 19862.449us 00:08:37.599 90.00000% : 21374.818us 00:08:37.599 95.00000% : 22786.363us 00:08:37.599 98.00000% : 
23996.258us 00:08:37.599 99.00000% : 29037.489us 00:08:37.599 99.50000% : 38515.003us 00:08:37.599 99.90000% : 39119.951us 00:08:37.599 99.99000% : 39321.600us 00:08:37.599 99.99900% : 39321.600us 00:08:37.599 99.99990% : 39321.600us 00:08:37.599 99.99999% : 39321.600us 00:08:37.599 00:08:37.599 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:37.599 ================================================================================= 00:08:37.599 1.00000% : 14922.043us 00:08:37.599 10.00000% : 16535.237us 00:08:37.599 25.00000% : 17341.834us 00:08:37.599 50.00000% : 18652.554us 00:08:37.599 75.00000% : 19963.274us 00:08:37.599 90.00000% : 21576.468us 00:08:37.599 95.00000% : 22483.889us 00:08:37.599 98.00000% : 23996.258us 00:08:37.599 99.00000% : 29239.138us 00:08:37.599 99.50000% : 38716.652us 00:08:37.599 99.90000% : 39321.600us 00:08:37.599 99.99000% : 39523.249us 00:08:37.599 99.99900% : 39523.249us 00:08:37.599 99.99990% : 39523.249us 00:08:37.599 99.99999% : 39523.249us 00:08:37.599 00:08:37.599 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:37.599 ================================================================================= 00:08:37.599 1.00000% : 15224.517us 00:08:37.599 10.00000% : 16434.412us 00:08:37.599 25.00000% : 17341.834us 00:08:37.599 50.00000% : 18753.378us 00:08:37.599 75.00000% : 19963.274us 00:08:37.599 90.00000% : 21475.643us 00:08:37.599 95.00000% : 22383.065us 00:08:37.599 98.00000% : 23391.311us 00:08:37.599 99.00000% : 28634.191us 00:08:37.599 99.50000% : 38313.354us 00:08:37.599 99.90000% : 39119.951us 00:08:37.599 99.99000% : 39119.951us 00:08:37.599 99.99900% : 39119.951us 00:08:37.599 99.99990% : 39119.951us 00:08:37.599 99.99999% : 39119.951us 00:08:37.599 00:08:37.599 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:37.599 ================================================================================= 00:08:37.599 1.00000% : 14922.043us 00:08:37.599 10.00000% : 16333.588us 00:08:37.599 25.00000% : 17341.834us 00:08:37.599 50.00000% : 18753.378us 00:08:37.599 75.00000% : 19963.274us 00:08:37.599 90.00000% : 21576.468us 00:08:37.599 95.00000% : 22383.065us 00:08:37.599 98.00000% : 23592.960us 00:08:37.599 99.00000% : 29844.086us 00:08:37.599 99.50000% : 39321.600us 00:08:37.599 99.90000% : 39926.548us 00:08:37.599 99.99000% : 40128.197us 00:08:37.599 99.99900% : 40128.197us 00:08:37.599 99.99990% : 40128.197us 00:08:37.599 99.99999% : 40128.197us 00:08:37.599 00:08:37.599 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:37.599 ================================================================================= 00:08:37.599 1.00000% : 14216.271us 00:08:37.599 10.00000% : 16333.588us 00:08:37.599 25.00000% : 17341.834us 00:08:37.599 50.00000% : 18854.203us 00:08:37.599 75.00000% : 19862.449us 00:08:37.599 90.00000% : 21374.818us 00:08:37.599 95.00000% : 22685.538us 00:08:37.599 98.00000% : 23492.135us 00:08:37.599 99.00000% : 29239.138us 00:08:37.599 99.50000% : 39724.898us 00:08:37.599 99.90000% : 40329.846us 00:08:37.599 99.99000% : 40531.495us 00:08:37.599 99.99900% : 40531.495us 00:08:37.599 99.99990% : 40531.495us 00:08:37.599 99.99999% : 40531.495us 00:08:37.599 00:08:37.599 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:37.599 ============================================================================== 00:08:37.599 Range in us Cumulative IO count 00:08:37.599 13611.323 - 13712.148: 0.1474% ( 10) 00:08:37.599 13712.148 - 13812.972: 0.3390% ( 
13) 00:08:37.599 13812.972 - 13913.797: 0.5749% ( 16) 00:08:37.599 13913.797 - 14014.622: 0.7518% ( 12) 00:08:37.599 14014.622 - 14115.446: 0.8844% ( 9) 00:08:37.599 14115.446 - 14216.271: 0.9434% ( 4) 00:08:37.599 14720.394 - 14821.218: 0.9729% ( 2) 00:08:37.599 14821.218 - 14922.043: 1.0318% ( 4) 00:08:37.599 14922.043 - 15022.868: 1.1645% ( 9) 00:08:37.599 15022.868 - 15123.692: 1.4298% ( 18) 00:08:37.599 15123.692 - 15224.517: 1.7394% ( 21) 00:08:37.599 15224.517 - 15325.342: 1.8721% ( 9) 00:08:37.599 15325.342 - 15426.166: 2.1079% ( 16) 00:08:37.599 15426.166 - 15526.991: 2.4764% ( 25) 00:08:37.599 15526.991 - 15627.815: 2.8449% ( 25) 00:08:37.599 15627.815 - 15728.640: 3.3608% ( 35) 00:08:37.599 15728.640 - 15829.465: 3.8178% ( 31) 00:08:37.599 15829.465 - 15930.289: 4.2600% ( 30) 00:08:37.599 15930.289 - 16031.114: 5.0265% ( 52) 00:08:37.599 16031.114 - 16131.938: 5.6751% ( 44) 00:08:37.599 16131.938 - 16232.763: 6.4416% ( 52) 00:08:37.599 16232.763 - 16333.588: 7.5029% ( 72) 00:08:37.599 16333.588 - 16434.412: 9.1686% ( 113) 00:08:37.599 16434.412 - 16535.237: 10.9817% ( 123) 00:08:37.599 16535.237 - 16636.062: 12.9570% ( 134) 00:08:37.599 16636.062 - 16736.886: 15.2270% ( 154) 00:08:37.599 16736.886 - 16837.711: 17.0843% ( 126) 00:08:37.599 16837.711 - 16938.535: 18.6321% ( 105) 00:08:37.599 16938.535 - 17039.360: 20.2241% ( 108) 00:08:37.599 17039.360 - 17140.185: 22.1551% ( 131) 00:08:37.599 17140.185 - 17241.009: 23.8650% ( 116) 00:08:37.599 17241.009 - 17341.834: 25.5454% ( 114) 00:08:37.599 17341.834 - 17442.658: 26.8721% ( 90) 00:08:37.599 17442.658 - 17543.483: 28.1545% ( 87) 00:08:37.599 17543.483 - 17644.308: 29.7022% ( 105) 00:08:37.599 17644.308 - 17745.132: 31.0731% ( 93) 00:08:37.599 17745.132 - 17845.957: 32.6504% ( 107) 00:08:37.599 17845.957 - 17946.782: 33.7706% ( 76) 00:08:37.599 17946.782 - 18047.606: 35.3774% ( 109) 00:08:37.599 18047.606 - 18148.431: 37.0136% ( 111) 00:08:37.599 18148.431 - 18249.255: 38.5908% ( 107) 00:08:37.599 18249.255 - 18350.080: 40.3597% ( 120) 00:08:37.599 18350.080 - 18450.905: 42.0843% ( 117) 00:08:37.599 18450.905 - 18551.729: 44.0006% ( 130) 00:08:37.599 18551.729 - 18652.554: 46.2412% ( 152) 00:08:37.599 18652.554 - 18753.378: 49.0124% ( 188) 00:08:37.599 18753.378 - 18854.203: 51.7836% ( 188) 00:08:37.599 18854.203 - 18955.028: 54.3337% ( 173) 00:08:37.599 18955.028 - 19055.852: 56.7807% ( 166) 00:08:37.599 19055.852 - 19156.677: 59.1539% ( 161) 00:08:37.599 19156.677 - 19257.502: 61.1881% ( 138) 00:08:37.599 19257.502 - 19358.326: 63.3402% ( 146) 00:08:37.599 19358.326 - 19459.151: 65.3449% ( 136) 00:08:37.599 19459.151 - 19559.975: 67.0106% ( 113) 00:08:37.599 19559.975 - 19660.800: 68.9858% ( 134) 00:08:37.599 19660.800 - 19761.625: 71.2559% ( 154) 00:08:37.599 19761.625 - 19862.449: 73.4228% ( 147) 00:08:37.599 19862.449 - 19963.274: 75.3538% ( 131) 00:08:37.599 19963.274 - 20064.098: 77.0195% ( 113) 00:08:37.599 20064.098 - 20164.923: 78.4788% ( 99) 00:08:37.599 20164.923 - 20265.748: 79.7317% ( 85) 00:08:37.599 20265.748 - 20366.572: 80.9110% ( 80) 00:08:37.599 20366.572 - 20467.397: 82.2966% ( 94) 00:08:37.599 20467.397 - 20568.222: 83.8001% ( 102) 00:08:37.599 20568.222 - 20669.046: 85.1415% ( 91) 00:08:37.599 20669.046 - 20769.871: 86.2176% ( 73) 00:08:37.599 20769.871 - 20870.695: 87.2936% ( 73) 00:08:37.599 20870.695 - 20971.520: 88.0307% ( 50) 00:08:37.599 20971.520 - 21072.345: 88.9593% ( 63) 00:08:37.599 21072.345 - 21173.169: 89.7553% ( 54) 00:08:37.599 21173.169 - 21273.994: 90.5071% ( 51) 00:08:37.599 21273.994 - 
21374.818: 91.1851% ( 46) 00:08:37.599 21374.818 - 21475.643: 91.7011% ( 35) 00:08:37.599 21475.643 - 21576.468: 92.1138% ( 28) 00:08:37.599 21576.468 - 21677.292: 92.6445% ( 36) 00:08:37.599 21677.292 - 21778.117: 93.1309% ( 33) 00:08:37.599 21778.117 - 21878.942: 93.5289% ( 27) 00:08:37.599 21878.942 - 21979.766: 93.8532% ( 22) 00:08:37.599 21979.766 - 22080.591: 94.1480% ( 20) 00:08:37.600 22080.591 - 22181.415: 94.4133% ( 18) 00:08:37.600 22181.415 - 22282.240: 94.6344% ( 15) 00:08:37.600 22282.240 - 22383.065: 94.8261% ( 13) 00:08:37.600 22383.065 - 22483.889: 94.9735% ( 10) 00:08:37.600 22483.889 - 22584.714: 95.1356% ( 11) 00:08:37.600 22584.714 - 22685.538: 95.3272% ( 13) 00:08:37.600 22685.538 - 22786.363: 95.7695% ( 30) 00:08:37.600 22786.363 - 22887.188: 96.0495% ( 19) 00:08:37.600 22887.188 - 22988.012: 96.2559% ( 14) 00:08:37.600 22988.012 - 23088.837: 96.6981% ( 30) 00:08:37.600 23088.837 - 23189.662: 96.9340% ( 16) 00:08:37.600 23189.662 - 23290.486: 97.1846% ( 17) 00:08:37.600 23290.486 - 23391.311: 97.4646% ( 19) 00:08:37.600 23391.311 - 23492.135: 97.8037% ( 23) 00:08:37.600 23492.135 - 23592.960: 97.8921% ( 6) 00:08:37.600 23592.960 - 23693.785: 97.9363% ( 3) 00:08:37.600 23693.785 - 23794.609: 97.9805% ( 3) 00:08:37.600 23794.609 - 23895.434: 98.0690% ( 6) 00:08:37.600 23895.434 - 23996.258: 98.1132% ( 3) 00:08:37.600 26012.751 - 26214.400: 98.1427% ( 2) 00:08:37.600 26214.400 - 26416.049: 98.2311% ( 6) 00:08:37.600 26416.049 - 26617.698: 98.3048% ( 5) 00:08:37.600 26617.698 - 26819.348: 98.3933% ( 6) 00:08:37.600 26819.348 - 27020.997: 98.4817% ( 6) 00:08:37.600 27020.997 - 27222.646: 98.5849% ( 7) 00:08:37.600 27222.646 - 27424.295: 98.6881% ( 7) 00:08:37.600 27424.295 - 27625.945: 98.8060% ( 8) 00:08:37.600 27625.945 - 27827.594: 98.9092% ( 7) 00:08:37.600 27827.594 - 28029.243: 99.0271% ( 8) 00:08:37.600 28029.243 - 28230.892: 99.0566% ( 2) 00:08:37.600 38313.354 - 38515.003: 99.1008% ( 3) 00:08:37.600 38515.003 - 38716.652: 99.1893% ( 6) 00:08:37.600 38716.652 - 38918.302: 99.2630% ( 5) 00:08:37.600 38918.302 - 39119.951: 99.3662% ( 7) 00:08:37.600 39119.951 - 39321.600: 99.4546% ( 6) 00:08:37.600 39321.600 - 39523.249: 99.5430% ( 6) 00:08:37.600 39523.249 - 39724.898: 99.6462% ( 7) 00:08:37.600 39724.898 - 39926.548: 99.7347% ( 6) 00:08:37.600 39926.548 - 40128.197: 99.8231% ( 6) 00:08:37.600 40128.197 - 40329.846: 99.9263% ( 7) 00:08:37.600 40329.846 - 40531.495: 99.9853% ( 4) 00:08:37.600 40531.495 - 40733.145: 100.0000% ( 1) 00:08:37.600 00:08:37.600 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:37.600 ============================================================================== 00:08:37.600 Range in us Cumulative IO count 00:08:37.600 12905.551 - 13006.375: 0.0295% ( 2) 00:08:37.600 13006.375 - 13107.200: 0.1916% ( 11) 00:08:37.600 13107.200 - 13208.025: 0.5749% ( 26) 00:08:37.600 13208.025 - 13308.849: 0.7223% ( 10) 00:08:37.600 13308.849 - 13409.674: 0.8255% ( 7) 00:08:37.600 13409.674 - 13510.498: 0.9287% ( 7) 00:08:37.600 13510.498 - 13611.323: 0.9434% ( 1) 00:08:37.600 14922.043 - 15022.868: 0.9581% ( 1) 00:08:37.600 15123.692 - 15224.517: 0.9729% ( 1) 00:08:37.600 15224.517 - 15325.342: 1.0908% ( 8) 00:08:37.600 15325.342 - 15426.166: 1.3119% ( 15) 00:08:37.600 15426.166 - 15526.991: 1.6657% ( 24) 00:08:37.600 15526.991 - 15627.815: 2.0489% ( 26) 00:08:37.600 15627.815 - 15728.640: 2.4175% ( 25) 00:08:37.600 15728.640 - 15829.465: 3.2282% ( 55) 00:08:37.600 15829.465 - 15930.289: 3.6262% ( 27) 00:08:37.600 15930.289 - 16031.114: 
00:08:37.600 [Bucket data for the preceding controller: cumulative IO climbs from 4.0979% ( 32) near 16.0 ms, crosses 51.1203% in the 18652.554 - 18753.378 us bucket, reaches 99.0566% ( 4) by 29037.489 us, and hits 100.0000% ( 4) in the 39119.951 - 39321.600 us bucket.]
00:08:37.600 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:37.600 ==============================================================================
00:08:37.600 Range in us Cumulative IO count
00:08:37.600 [Bucket data: first IO in the 12149.366 - 12199.778 us bucket (0.1474%); 51.1350% by 18652.554 us; 99.0566% ( 4) by 29239.138 us; 100.0000% ( 5) in the 39321.600 - 39523.249 us bucket.]
00:08:37.601 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:37.601 ==============================================================================
00:08:37.601 Range in us Cumulative IO count
00:08:37.601 [Bucket data: first IO in the 11342.769 - 11393.182 us bucket (0.0147%); 50.2948% by 18753.378 us; 99.0566% ( 5) by 28634.191 us; 100.0000% ( 7) in the 38918.302 - 39119.951 us bucket.]
00:08:37.602 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:37.602 ==============================================================================
00:08:37.602 Range in us Cumulative IO count
00:08:37.602 [Bucket data: first IO in the 8267.618 - 8318.031 us bucket (0.0442%); 50.7960% by 18753.378 us; 99.0566% ( 7) by 29844.086 us; 100.0000% ( 2) in the 39926.548 - 40128.197 us bucket.]
00:08:37.602 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:37.602 ==============================================================================
00:08:37.602 Range in us Cumulative IO count
00:08:37.603 [Bucket data: first IO in the 6856.074 - 6906.486 us bucket (0.0147%); 51.5035% by 18854.203 us; 99.0566% ( 7) by 29239.138 us; 100.0000% ( 6) in the 40329.846 - 40531.495 us bucket.]
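The tables above map microsecond-scale latency buckets ("Range in us") to the cumulative share of completed IO, so any percentile can be read off as the first bucket whose cumulative percentage crosses the target. A minimal sketch, assuming the bucket rows alone were saved to a hypothetical perf_hist.txt in the "lo - hi: pct% ( n )" shape shown:

    # Approximate p99: print the upper bound of the first bucket at or past 99%.
    awk -F'[-:%(]' '{ gsub(/ /, "", $2); gsub(/ /, "", $3); if ($3 + 0 >= 99.0) { print "p99 <= " $2 " us"; exit } }' perf_hist.txt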
00:08:37.603
00:08:37.603 21:50:22 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:37.603
00:08:37.603 real 0m2.519s
00:08:37.603 user 0m2.142s
00:08:37.603 sys 0m0.249s
00:08:37.603 21:50:22 nvme.nvme_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.603 ************************************
00:08:37.603 END TEST nvme_perf
00:08:37.603 ************************************
00:08:37.603 21:50:22 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
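Each test below is launched through the harness's run_test helper, which the xtrace lines show wrapping the command with START/END banners and timing. The real helper lives in common/autotest_common.sh and is not reproduced in this log, so the following is only a minimal sketch of the pattern, with run_test_sketch as a hypothetical stand-in:

    # Hypothetical stand-in for the harness's run_test wrapper.
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                  # run the test command and report elapsed time
        echo "END TEST $name"
    }
    run_test_sketch nvme_hello_world ./build/examples/hello_world -i 0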
00:08:37.603 21:50:22 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:37.603 21:50:22 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:37.603 21:50:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.603 21:50:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:37.603 ************************************
00:08:37.603 START TEST nvme_hello_world
00:08:37.603 ************************************
00:08:37.603 21:50:22 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:37.603 Initializing NVMe Controllers
00:08:37.603 Attached to 0000:00:11.0
00:08:37.603   Namespace ID: 1 size: 5GB
00:08:37.603 Attached to 0000:00:13.0
00:08:37.603   Namespace ID: 1 size: 1GB
00:08:37.603 Attached to 0000:00:10.0
00:08:37.603   Namespace ID: 1 size: 6GB
00:08:37.603 Attached to 0000:00:12.0
00:08:37.603   Namespace ID: 1 size: 4GB
00:08:37.603   Namespace ID: 2 size: 4GB
00:08:37.603   Namespace ID: 3 size: 4GB
00:08:37.603 Initialization complete.
00:08:37.603 INFO: using host memory buffer for IO
00:08:37.603 Hello world!
00:08:37.603 INFO: using host memory buffer for IO
00:08:37.603 Hello world!
00:08:37.603 INFO: using host memory buffer for IO
00:08:37.603 Hello world!
00:08:37.603 INFO: using host memory buffer for IO
00:08:37.603 Hello world!
00:08:37.603 INFO: using host memory buffer for IO
00:08:37.603 Hello world!
00:08:37.603 INFO: using host memory buffer for IO
00:08:37.603 Hello world!
00:08:37.603
00:08:37.603 real 0m0.244s
00:08:37.603 user 0m0.067s
00:08:37.603 sys 0m0.124s
00:08:37.603 21:50:22 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:37.603 21:50:22 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:37.866 ************************************
00:08:37.866 END TEST nvme_hello_world
00:08:37.866 ************************************
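The hello_world run above attached to all four QEMU controllers, enumerated each namespace and its size, and wrote and read back a greeting per namespace using a host memory buffer for IO. A hedged reproduction sketch using the paths printed in the log (root privileges and NVMe devices bound to SPDK are assumed):

    # Re-run the hello_world example against whatever NVMe devices are bound to SPDK.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/examples/hello_world -i 0    # -i 0 matches the shared-memory ID used above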
00:08:37.866 21:50:22 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:37.866 21:50:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:37.866 21:50:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:37.866 21:50:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:37.866 ************************************
00:08:37.866 START TEST nvme_sgl
00:08:37.866 ************************************
00:08:37.866 21:50:22 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:37.866 0000:00:11.0: build_io_request_0, _1, _3, _8, _9, _11: Invalid IO length parameter
00:08:37.866 0000:00:13.0: build_io_request_0 through _11: Invalid IO length parameter
00:08:38.128 0000:00:10.0: build_io_request_0, _1, _3, _8, _9, _11: Invalid IO length parameter
00:08:38.128 0000:00:12.0: build_io_request_0 through _11: Invalid IO length parameter
00:08:38.128 NVMe Readv/Writev Request test
00:08:38.128 Attached to 0000:00:11.0
00:08:38.128 Attached to 0000:00:13.0
00:08:38.128 Attached to 0000:00:10.0
00:08:38.128 Attached to 0000:00:12.0
00:08:38.128 0000:00:11.0: build_io_request_2, _4, _5, _6, _7, _10: test passed
00:08:38.128 0000:00:10.0: build_io_request_2, _4, _5, _6, _7, _10: test passed
00:08:38.128 Cleaning up...
00:08:38.128
00:08:38.128 real 0m0.309s
00:08:38.128 user 0m0.159s
00:08:38.128 sys 0m0.104s
00:08:38.128 21:50:22 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:38.128 21:50:22 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:38.128 ************************************
00:08:38.128 END TEST nvme_sgl
00:08:38.128 ************************************
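nvme_sgl exercises scatter-gather request building: the build_io_request_* cases with deliberately invalid IO lengths are rejected ("Invalid IO length parameter"), while the well-formed cases report "test passed", so the rejection lines above are the expected outcome. A sketch of rerunning it directly, per the path in the trace:

    # The sgl binary takes no arguments; it probes every attached controller.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvme/sgl/sgl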
00:08:38.128 21:50:22 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:38.128 21:50:22 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:38.128 21:50:22 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:38.128 21:50:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:38.128 ************************************
00:08:38.128 START TEST nvme_e2edp
00:08:38.128 ************************************
00:08:38.128 21:50:22 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:38.390 NVMe Write/Read with End-to-End data protection test
00:08:38.390 Attached to 0000:00:11.0
00:08:38.390 Attached to 0000:00:13.0
00:08:38.390 Attached to 0000:00:10.0
00:08:38.390 Attached to 0000:00:12.0
00:08:38.390 Cleaning up...
00:08:38.390
00:08:38.390 real 0m0.250s
00:08:38.390 user 0m0.073s
00:08:38.390 sys 0m0.123s
00:08:38.390 ************************************
00:08:38.390 END TEST nvme_e2edp
00:08:38.390 ************************************
00:08:38.390 21:50:23 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:38.390 21:50:23 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:38.390 21:50:23 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:38.390 21:50:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:38.390 21:50:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:38.390 21:50:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:38.390 ************************************
00:08:38.390 START TEST nvme_reserve
00:08:38.390 ************************************
00:08:38.390 21:50:23 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:38.652 =====================================================
00:08:38.652 NVMe Controller at PCI bus 0, device 17, function 0
00:08:38.652 =====================================================
00:08:38.652 Reservations: Not Supported
00:08:38.652 =====================================================
00:08:38.652 NVMe Controller at PCI bus 0, device 19, function 0
00:08:38.652 =====================================================
00:08:38.652 Reservations: Not Supported
00:08:38.652 =====================================================
00:08:38.652 NVMe Controller at PCI bus 0, device 16, function 0
00:08:38.652 =====================================================
00:08:38.652 Reservations: Not Supported
00:08:38.652 =====================================================
00:08:38.652 NVMe Controller at PCI bus 0, device 18, function 0
00:08:38.652 =====================================================
00:08:38.652 Reservations: Not Supported
00:08:38.652 Reservation test passed
00:08:38.652
00:08:38.652 real 0m0.212s
00:08:38.652 user 0m0.059s
00:08:38.652 sys 0m0.108s
00:08:38.652 ************************************
00:08:38.652 21:50:23 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:38.652 21:50:23 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:38.652 END TEST nvme_reserve
00:08:38.652 ************************************
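All four emulated controllers report "Reservations: Not Supported", so the reserve test passes without ever exercising reservation acquire/release. One way to double-check that capability on a single controller is SPDK's identify example; the -r transport-ID syntax below is an assumption modeled on the PCIe address strings used elsewhere in this log:

    # Dump controller data (including reservation support) for one PCIe controller.
    sudo ./build/examples/identify -r 'trtype:PCIe traddr:0000:00:10.0'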
00:08:38.652 21:50:23 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:38.652 21:50:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:38.652 21:50:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:38.652 21:50:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:38.652 ************************************
00:08:38.652 START TEST nvme_err_injection
00:08:38.652 ************************************
00:08:38.652 21:50:23 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:38.914 NVMe Error Injection test
00:08:38.914 Attached to 0000:00:11.0
00:08:38.914 Attached to 0000:00:13.0
00:08:38.914 Attached to 0000:00:10.0
00:08:38.914 Attached to 0000:00:12.0
00:08:38.914 0000:00:12.0: get features failed as expected
00:08:38.914 0000:00:11.0: get features failed as expected
00:08:38.914 0000:00:13.0: get features failed as expected
00:08:38.914 0000:00:10.0: get features failed as expected
00:08:38.914 0000:00:11.0: get features successfully as expected
00:08:38.914 0000:00:13.0: get features successfully as expected
00:08:38.914 0000:00:10.0: get features successfully as expected
00:08:38.914 0000:00:12.0: get features successfully as expected
00:08:38.914 0000:00:11.0: read failed as expected
00:08:38.914 0000:00:13.0: read failed as expected
00:08:38.914 0000:00:10.0: read failed as expected
00:08:38.914 0000:00:12.0: read failed as expected
00:08:38.914 0000:00:11.0: read successfully as expected
00:08:38.914 0000:00:13.0: read successfully as expected
00:08:38.914 0000:00:10.0: read successfully as expected
00:08:38.914 0000:00:12.0: read successfully as expected
00:08:38.914 Cleaning up...
00:08:38.914
00:08:38.914 real 0m0.218s
00:08:38.914 user 0m0.063s
00:08:38.914 sys 0m0.113s
00:08:38.914 21:50:23 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:38.914 21:50:23 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:08:38.914 ************************************
00:08:38.914 END TEST nvme_err_injection
00:08:38.914 ************************************
00:08:38.914 21:50:23 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:38.914 21:50:23 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:38.914 21:50:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:38.914 21:50:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:38.914 ************************************
00:08:38.914 START TEST nvme_overhead
00:08:38.914 ************************************
00:08:38.914 21:50:23 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
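The overhead tool launched here measures per-IO software overhead on the submit and completion paths; its flags come straight from the run_test trace above, so only the comments below are interpretation:

    # Same invocation as the harness used:
    #   -o 4096  IO size in bytes      -t 1  run time in seconds
    #   -H       print histograms      -i 0  shared-memory ID
    sudo ./test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0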
00:08:40.375 Initializing NVMe Controllers
00:08:40.375 Attached to 0000:00:11.0
00:08:40.375 Attached to 0000:00:13.0
00:08:40.375 Attached to 0000:00:10.0
00:08:40.375 Attached to 0000:00:12.0
00:08:40.375 Initialization complete. Launching workers.
00:08:40.375 submit (in ns) avg, min, max = 15298.9, 12053.8, 78165.4
00:08:40.375 complete (in ns) avg, min, max = 9419.8, 8223.8, 209940.8
00:08:40.375
00:08:40.375 Submit histogram
00:08:40.375 ================
00:08:40.375 Range in us Cumulative Count
00:08:40.375 [Bucket data: first submission in the 12.012 - 12.062 us bucket (0.0340%); 52.7192% by 14.474 us; 95.0914% by 22.786 us; 100.0000% ( 1) in the 77.982 - 78.375 us bucket.]
00:08:40.376 Complete histogram
00:08:40.376 ==================
00:08:40.376 Range in us Cumulative Count
00:08:40.376 [Bucket data: first completion in the 8.222 - 8.271 us bucket (0.2379%); 52.8212% by 8.763 us; 95.3093% by 10.732 us; 100.0000% ( 1) in the 209.526 - 211.102 us bucket.]
00:08:40.376
00:08:40.376
00:08:40.376 real 0m1.202s
00:08:40.376 user 0m1.067s
00:08:40.376 sys 0m0.090s
00:08:40.376 21:50:24 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:40.376 21:50:24 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:08:40.376 ************************************
00:08:40.376 END TEST nvme_overhead
00:08:40.376 ************************************
00:08:40.376 21:50:24 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:40.376 21:50:24 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:08:40.376 21:50:24 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:40.376 21:50:24 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:40.376 ************************************
00:08:40.376 START TEST nvme_arbitration
00:08:40.376 ************************************
00:08:40.376 21:50:24 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:08:43.679 Initializing NVMe Controllers
00:08:43.679 Attached to 0000:00:11.0
00:08:43.679 Attached to 0000:00:13.0
00:08:43.679 Attached to 0000:00:10.0
00:08:43.679 Attached to 0000:00:12.0
00:08:43.679 Associating QEMU NVMe Ctrl (12341 ) with lcore 0
00:08:43.679 Associating QEMU NVMe Ctrl (12343 ) with lcore 1
00:08:43.679 Associating QEMU NVMe Ctrl (12340 ) with lcore 2
00:08:43.679 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:08:43.679 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:08:43.679 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:08:43.679 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:08:43.679 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:08:43.680 Initialization complete. Launching workers.
00:08:43.680 Starting thread on core 1 with urgent priority queue
00:08:43.680 Starting thread on core 2 with urgent priority queue
00:08:43.680 Starting thread on core 3 with urgent priority queue
00:08:43.680 Starting thread on core 0 with urgent priority queue
00:08:43.680 QEMU NVMe Ctrl (12341 ) core 0: 3157.33 IO/s 31.67 secs/100000 ios
00:08:43.680 QEMU NVMe Ctrl (12342 ) core 0: 3157.33 IO/s 31.67 secs/100000 ios
00:08:43.680 QEMU NVMe Ctrl (12343 ) core 1: 3114.67 IO/s 32.11 secs/100000 ios
00:08:43.680 QEMU NVMe Ctrl (12342 ) core 1: 3114.67 IO/s 32.11 secs/100000 ios
00:08:43.680 QEMU NVMe Ctrl (12340 ) core 2: 2944.00 IO/s 33.97 secs/100000 ios
00:08:43.680 QEMU NVMe Ctrl (12342 ) core 3: 3050.67 IO/s 32.78 secs/100000 ios
00:08:43.680 ========================================================
00:08:43.680
00:08:43.680 real 0m3.257s
00:08:43.680 user 0m8.966s
00:08:43.680 sys 0m0.151s
00:08:43.680 21:50:28 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:43.680 21:50:28 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:08:43.680 ************************************
00:08:43.680 END TEST nvme_arbitration
00:08:43.680 ************************************
00:08:43.680 21:50:28 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:43.680 21:50:28 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:08:43.680 21:50:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:43.680 21:50:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:43.680 ************************************
00:08:43.680 START TEST nvme_single_aen
00:08:43.680 ************************************
00:08:43.680 21:50:28 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:08:43.948 Asynchronous Event Request test
00:08:43.948 Attached to 0000:00:11.0
00:08:43.948 Attached to 0000:00:13.0
00:08:43.948 Attached to 0000:00:10.0
00:08:43.948 Attached to 0000:00:12.0
00:08:43.948 Reset controller to setup AER completions for this process
00:08:43.948 Registering asynchronous event callbacks...
00:08:43.948 Getting orig temperature thresholds of all controllers
00:08:43.948 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:43.948 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:43.948 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:43.948 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:08:43.948 Setting all controllers temperature threshold low to trigger AER
00:08:43.948 Waiting for all controllers temperature threshold to be set lower
00:08:43.948 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:43.948 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:08:43.948 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:43.948 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:08:43.948 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:43.948 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:08:43.948 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:08:43.948 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:08:43.948 Waiting for all controllers to trigger AER and reset threshold
00:08:43.948 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:43.948 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:43.948 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:43.948 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:08:43.948 Cleaning up...
00:08:43.948
00:08:43.948 real 0m0.224s
00:08:43.948 user 0m0.075s
00:08:43.948 sys 0m0.099s
00:08:43.948 21:50:28 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:43.948 21:50:28 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:08:43.948 ************************************
00:08:43.948 END TEST nvme_single_aen
00:08:43.948 ************************************
00:08:43.948 21:50:28 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:08:43.948 21:50:28 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:43.948 21:50:28 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:43.948 21:50:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:43.948 ************************************
00:08:43.948 START TEST nvme_doorbell_aers
00:08:43.948 ************************************
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=()
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
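The doorbell test enumerates its target PCI addresses by rendering an SPDK controller config with scripts/gen_nvme.sh and pulling each traddr out with jq, then gives every device a 10-second, signal-preserving run. A sketch of that loop, matching the traces above:

    # Gather bound NVMe PCI addresses and run the doorbell test against each.
    cd /home/vagrant/spdk_repo/spdk
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            ./test/nvme/doorbell_aers/doorbell_aers -r "trtype:PCIe traddr:$bdf"
    done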
00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:43.948 21:50:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:44.222 [2024-09-30 21:50:28.880769] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:08:54.225 Executing: test_write_invalid_db 00:08:54.225 Waiting for AER completion... 00:08:54.225 Failure: test_write_invalid_db 00:08:54.225 00:08:54.225 Executing: test_invalid_db_write_overflow_sq 00:08:54.225 Waiting for AER completion... 00:08:54.225 Failure: test_invalid_db_write_overflow_sq 00:08:54.225 00:08:54.225 Executing: test_invalid_db_write_overflow_cq 00:08:54.225 Waiting for AER completion... 00:08:54.225 Failure: test_invalid_db_write_overflow_cq 00:08:54.225 00:08:54.225 21:50:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:54.225 21:50:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:54.225 [2024-09-30 21:50:38.892813] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:04.201 Executing: test_write_invalid_db 00:09:04.201 Waiting for AER completion... 00:09:04.201 Failure: test_write_invalid_db 00:09:04.201 00:09:04.201 Executing: test_invalid_db_write_overflow_sq 00:09:04.201 Waiting for AER completion... 00:09:04.201 Failure: test_invalid_db_write_overflow_sq 00:09:04.201 00:09:04.201 Executing: test_invalid_db_write_overflow_cq 00:09:04.201 Waiting for AER completion... 00:09:04.201 Failure: test_invalid_db_write_overflow_cq 00:09:04.201 00:09:04.201 21:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:04.201 21:50:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:04.201 [2024-09-30 21:50:48.945232] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:14.170 Executing: test_write_invalid_db 00:09:14.170 Waiting for AER completion... 00:09:14.170 Failure: test_write_invalid_db 00:09:14.170 00:09:14.170 Executing: test_invalid_db_write_overflow_sq 00:09:14.170 Waiting for AER completion... 00:09:14.170 Failure: test_invalid_db_write_overflow_sq 00:09:14.170 00:09:14.170 Executing: test_invalid_db_write_overflow_cq 00:09:14.170 Waiting for AER completion... 
00:09:14.170 Failure: test_invalid_db_write_overflow_cq 00:09:14.170 00:09:14.170 21:50:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:14.170 21:50:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:14.170 [2024-09-30 21:50:58.948772] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.132 Executing: test_write_invalid_db 00:09:24.132 Waiting for AER completion... 00:09:24.132 Failure: test_write_invalid_db 00:09:24.132 00:09:24.132 Executing: test_invalid_db_write_overflow_sq 00:09:24.132 Waiting for AER completion... 00:09:24.132 Failure: test_invalid_db_write_overflow_sq 00:09:24.132 00:09:24.132 Executing: test_invalid_db_write_overflow_cq 00:09:24.132 Waiting for AER completion... 00:09:24.132 Failure: test_invalid_db_write_overflow_cq 00:09:24.132 00:09:24.132 00:09:24.132 real 0m40.189s 00:09:24.133 user 0m34.037s 00:09:24.133 sys 0m5.740s 00:09:24.133 21:51:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.133 21:51:08 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:24.133 ************************************ 00:09:24.133 END TEST nvme_doorbell_aers 00:09:24.133 ************************************ 00:09:24.133 21:51:08 nvme -- nvme/nvme.sh@97 -- # uname 00:09:24.133 21:51:08 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:24.133 21:51:08 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:24.133 21:51:08 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:24.133 21:51:08 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.133 21:51:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.133 ************************************ 00:09:24.133 START TEST nvme_multi_aen 00:09:24.133 ************************************ 00:09:24.133 21:51:08 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:24.390 [2024-09-30 21:51:09.040834] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.040919] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.040939] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.042351] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.042393] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.042407] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.043504] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. 
Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.043541] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.043559] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.044562] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.044599] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 [2024-09-30 21:51:09.044616] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 76496) is not found. Dropping the request. 00:09:24.390 Child process pid: 77016 00:09:24.647 [Child] Asynchronous Event Request test 00:09:24.647 [Child] Attached to 0000:00:11.0 00:09:24.647 [Child] Attached to 0000:00:13.0 00:09:24.647 [Child] Attached to 0000:00:10.0 00:09:24.647 [Child] Attached to 0000:00:12.0 00:09:24.647 [Child] Registering asynchronous event callbacks... 00:09:24.647 [Child] Getting orig temperature thresholds of all controllers 00:09:24.647 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:24.647 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.647 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.647 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.647 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.647 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.647 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.647 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.647 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.647 [Child] Cleaning up... 00:09:24.647 Asynchronous Event Request test 00:09:24.647 Attached to 0000:00:11.0 00:09:24.647 Attached to 0000:00:13.0 00:09:24.647 Attached to 0000:00:10.0 00:09:24.647 Attached to 0000:00:12.0 00:09:24.647 Reset controller to setup AER completions for this process 00:09:24.647 Registering asynchronous event callbacks... 
00:09:24.647 Getting orig temperature thresholds of all controllers 00:09:24.647 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:24.647 Setting all controllers temperature threshold low to trigger AER 00:09:24.647 Waiting for all controllers temperature threshold to be set lower 00:09:24.648 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.648 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:24.648 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.648 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:24.648 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.648 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:24.648 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:24.648 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:24.648 Waiting for all controllers to trigger AER and reset threshold 00:09:24.648 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.648 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.648 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.648 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:24.648 Cleaning up... 00:09:24.648 00:09:24.648 real 0m0.448s 00:09:24.648 user 0m0.130s 00:09:24.648 sys 0m0.205s 00:09:24.648 21:51:09 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.648 21:51:09 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:24.648 ************************************ 00:09:24.648 END TEST nvme_multi_aen 00:09:24.648 ************************************ 00:09:24.648 21:51:09 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:24.648 21:51:09 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:24.648 21:51:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.648 21:51:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.648 ************************************ 00:09:24.648 START TEST nvme_startup 00:09:24.648 ************************************ 00:09:24.648 21:51:09 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:24.905 Initializing NVMe Controllers 00:09:24.905 Attached to 0000:00:11.0 00:09:24.905 Attached to 0000:00:13.0 00:09:24.905 Attached to 0000:00:10.0 00:09:24.905 Attached to 0000:00:12.0 00:09:24.905 Initialization complete. 00:09:24.905 Time used:161768.547 (us). 
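The nvme_startup pass above is a single timed attach: the startup binary brings up every controller and reports how long initialization took against the budget given by -t (1000000 in the run_test line above). The unit is an inference from the "Time used: ... (us)" print rather than from the tool's documented help, but read that way the budget is about 1 second against the roughly 162 ms actually used. Rerunning it standalone would look like:

    # rerun the controller-init timing check by hand (same budget as the harness;
    # -t assumed to be in microseconds, judging by the "Time used" line above)
    /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000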
00:09:24.905 00:09:24.905 real 0m0.229s 00:09:24.905 user 0m0.059s 00:09:24.905 sys 0m0.120s 00:09:24.905 21:51:09 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.905 21:51:09 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:24.905 ************************************ 00:09:24.905 END TEST nvme_startup 00:09:24.905 ************************************ 00:09:24.905 21:51:09 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:24.905 21:51:09 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:24.905 21:51:09 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.905 21:51:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.905 ************************************ 00:09:24.905 START TEST nvme_multi_secondary 00:09:24.905 ************************************ 00:09:24.905 21:51:09 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:09:24.905 21:51:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=77067 00:09:24.905 21:51:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:24.905 21:51:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=77068 00:09:24.905 21:51:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:24.905 21:51:09 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:28.309 Initializing NVMe Controllers 00:09:28.309 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:28.309 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:28.309 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.309 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:28.309 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:28.309 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:28.309 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:28.309 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:28.309 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:28.309 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:28.309 Initialization complete. Launching workers. 
00:09:28.309 ========================================================
00:09:28.309 Latency(us)
00:09:28.309 Device Information : IOPS MiB/s Average min max
00:09:28.309 PCIE (0000:00:11.0) NSID 1 from core 2: 3277.36 12.80 4881.62 989.25 12326.98
00:09:28.309 PCIE (0000:00:13.0) NSID 1 from core 2: 3277.36 12.80 4881.51 985.33 12994.14
00:09:28.309 PCIE (0000:00:10.0) NSID 1 from core 2: 3277.36 12.80 4879.95 942.11 13503.27
00:09:28.309 PCIE (0000:00:12.0) NSID 1 from core 2: 3277.36 12.80 4882.65 907.33 16359.67
00:09:28.309 PCIE (0000:00:12.0) NSID 2 from core 2: 3277.36 12.80 4882.79 961.02 12417.86
00:09:28.309 PCIE (0000:00:12.0) NSID 3 from core 2: 3277.36 12.80 4882.67 985.51 12538.72
00:09:28.309 ========================================================
00:09:28.309 Total : 19664.17 76.81 4881.86 907.33 16359.67
00:09:28.309
00:09:28.309 21:51:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 77067
00:09:28.309 Initializing NVMe Controllers
00:09:28.309 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:28.309 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:28.309 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:28.309 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:28.309 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:09:28.309 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:09:28.309 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:09:28.309 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:09:28.309 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:09:28.309 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:09:28.309 Initialization complete. Launching workers.
00:09:28.309 ========================================================
00:09:28.309 Latency(us)
00:09:28.309 Device Information : IOPS MiB/s Average min max
00:09:28.309 PCIE (0000:00:11.0) NSID 1 from core 1: 7608.52 29.72 2102.48 831.74 5719.16
00:09:28.309 PCIE (0000:00:13.0) NSID 1 from core 1: 7608.52 29.72 2102.52 822.06 5493.28
00:09:28.309 PCIE (0000:00:10.0) NSID 1 from core 1: 7608.52 29.72 2101.45 807.91 6157.79
00:09:28.309 PCIE (0000:00:12.0) NSID 1 from core 1: 7608.52 29.72 2102.55 830.60 6609.97
00:09:28.309 PCIE (0000:00:12.0) NSID 2 from core 1: 7608.52 29.72 2102.52 809.18 6083.08
00:09:28.309 PCIE (0000:00:12.0) NSID 3 from core 1: 7608.52 29.72 2102.54 862.44 5899.71
00:09:28.309 ========================================================
00:09:28.309 Total : 45651.10 178.32 2102.34 807.91 6609.97
00:09:28.309
00:09:30.210 Initializing NVMe Controllers
00:09:30.210 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:30.210 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:30.210 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:30.210 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:30.210 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:30.210 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:30.210 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:30.210 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:30.210 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:30.210 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:30.210 Initialization complete. Launching workers.
00:09:30.210 ========================================================
00:09:30.210 Latency(us)
00:09:30.210 Device Information : IOPS MiB/s Average min max
00:09:30.210 PCIE (0000:00:11.0) NSID 1 from core 0: 10664.07 41.66 1499.99 634.00 8412.36
00:09:30.210 PCIE (0000:00:13.0) NSID 1 from core 0: 10657.87 41.63 1500.84 593.55 8052.09
00:09:30.210 PCIE (0000:00:10.0) NSID 1 from core 0: 10692.87 41.77 1494.99 571.50 8017.16
00:09:30.210 PCIE (0000:00:12.0) NSID 1 from core 0: 10692.87 41.77 1495.87 590.75 7810.80
00:09:30.210 PCIE (0000:00:12.0) NSID 2 from core 0: 10692.87 41.77 1495.84 441.24 9077.51
00:09:30.210 PCIE (0000:00:12.0) NSID 3 from core 0: 10692.47 41.77 1495.86 381.23 8598.39
00:09:30.210 ========================================================
00:09:30.210 Total : 64093.02 250.36 1497.23 381.23 9077.51
00:09:30.210
00:09:30.210 21:51:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 77068
00:09:30.210 21:51:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=77137
00:09:30.210 21:51:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:09:30.210 21:51:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:09:30.210 21:51:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=77138
00:09:30.210 21:51:14 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:09:33.559 Initializing NVMe Controllers
00:09:33.559 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:33.559 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:33.559 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:33.559 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:33.559 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:33.559 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:33.559 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:33.559 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:33.559 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:33.559 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:33.559 Initialization complete. Launching workers.
00:09:33.559 ========================================================
00:09:33.559 Latency(us)
00:09:33.559 Device Information : IOPS MiB/s Average min max
00:09:33.559 PCIE (0000:00:11.0) NSID 1 from core 0: 8031.89 31.37 1991.66 707.83 10440.38
00:09:33.559 PCIE (0000:00:13.0) NSID 1 from core 0: 8031.89 31.37 1991.79 702.89 10477.21
00:09:33.559 PCIE (0000:00:10.0) NSID 1 from core 0: 8031.89 31.37 1990.82 680.74 10534.80
00:09:33.559 PCIE (0000:00:12.0) NSID 1 from core 0: 8031.89 31.37 1991.74 710.49 10776.23
00:09:33.559 PCIE (0000:00:12.0) NSID 2 from core 0: 8031.89 31.37 1991.66 716.93 11135.74
00:09:33.559 PCIE (0000:00:12.0) NSID 3 from core 0: 8031.89 31.37 1991.67 708.81 10291.85
00:09:33.559 ========================================================
00:09:33.559 Total : 48191.33 188.25 1991.56 680.74 11135.74
00:09:33.559
00:09:33.559 Initializing NVMe Controllers
00:09:33.559 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:33.559 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:33.559 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:33.559 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:33.559 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:09:33.559 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:09:33.559 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:09:33.559 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:09:33.559 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:09:33.559 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:09:33.559 Initialization complete. Launching workers.
00:09:33.559 ========================================================
00:09:33.559 Latency(us)
00:09:33.559 Device Information : IOPS MiB/s Average min max
00:09:33.559 PCIE (0000:00:11.0) NSID 1 from core 1: 7630.28 29.81 2096.45 746.42 10384.20
00:09:33.559 PCIE (0000:00:13.0) NSID 1 from core 1: 7630.28 29.81 2096.50 743.52 10180.66
00:09:33.559 PCIE (0000:00:10.0) NSID 1 from core 1: 7630.28 29.81 2095.43 715.75 10298.09
00:09:33.559 PCIE (0000:00:12.0) NSID 1 from core 1: 7630.28 29.81 2096.55 735.06 10338.11
00:09:33.559 PCIE (0000:00:12.0) NSID 2 from core 1: 7630.28 29.81 2096.54 755.79 10377.09
00:09:33.559 PCIE (0000:00:12.0) NSID 3 from core 1: 7630.28 29.81 2096.54 747.01 10383.31
00:09:33.559 ========================================================
00:09:33.559 Total : 45781.67 178.83 2096.34 715.75 10384.20
00:09:33.559
00:09:35.462 Initializing NVMe Controllers
00:09:35.462 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:35.462 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:35.462 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:35.462 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:35.462 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:09:35.462 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:09:35.462 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:09:35.462 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:09:35.462 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:09:35.462 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:09:35.462 Initialization complete. Launching workers.
00:09:35.462 ========================================================
00:09:35.462 Latency(us)
00:09:35.462 Device Information : IOPS MiB/s Average min max
00:09:35.462 PCIE (0000:00:11.0) NSID 1 from core 2: 4479.20 17.50 3571.52 742.54 23087.39
00:09:35.462 PCIE (0000:00:13.0) NSID 1 from core 2: 4479.20 17.50 3571.51 746.36 23245.56
00:09:35.462 PCIE (0000:00:10.0) NSID 1 from core 2: 4479.20 17.50 3570.02 735.30 23412.03
00:09:35.462 PCIE (0000:00:12.0) NSID 1 from core 2: 4479.20 17.50 3571.19 753.48 23438.98
00:09:35.462 PCIE (0000:00:12.0) NSID 2 from core 2: 4479.20 17.50 3571.12 549.80 23316.54
00:09:35.462 PCIE (0000:00:12.0) NSID 3 from core 2: 4479.20 17.50 3570.88 444.43 22364.90
00:09:35.462 ========================================================
00:09:35.462 Total : 26875.21 104.98 3571.04 444.43 23438.98
00:09:35.462
00:09:35.462 21:51:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 77137
00:09:35.462 21:51:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 77138
00:09:35.462
00:09:35.462 real 0m10.632s
00:09:35.462 user 0m18.243s
00:09:35.462 sys 0m0.581s
00:09:35.462 21:51:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:35.462 21:51:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:09:35.462 ************************************
00:09:35.462 END TEST nvme_multi_secondary
00:09:35.462 ************************************
00:09:35.462 21:51:20 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:09:35.462 21:51:20 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:09:35.462 21:51:20 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/76088 ]]
00:09:35.462 21:51:20 nvme -- common/autotest_common.sh@1090 -- # kill 76088
00:09:35.462 21:51:20 nvme -- common/autotest_common.sh@1091 -- # wait 76088
00:09:35.462 [2024-09-30 21:51:20.250970] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.251063] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.251095] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.251119] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.251844] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.251906] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.251930] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.251950] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.252670] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request.
00:09:35.462 [2024-09-30 21:51:20.252731] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request. 00:09:35.462 [2024-09-30 21:51:20.252759] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request. 00:09:35.462 [2024-09-30 21:51:20.252781] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request. 00:09:35.462 [2024-09-30 21:51:20.253440] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request. 00:09:35.462 [2024-09-30 21:51:20.253496] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request. 00:09:35.462 [2024-09-30 21:51:20.253527] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request. 00:09:35.462 [2024-09-30 21:51:20.253547] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 77015) is not found. Dropping the request. 00:09:35.721 21:51:20 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:35.721 21:51:20 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:35.721 21:51:20 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:35.721 21:51:20 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:35.721 21:51:20 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.721 21:51:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.721 ************************************ 00:09:35.721 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:35.721 ************************************ 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:35.721 * Looking for test storage... 
00:09:35.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.721 --rc genhtml_branch_coverage=1 00:09:35.721 --rc genhtml_function_coverage=1 00:09:35.721 --rc genhtml_legend=1 00:09:35.721 --rc geninfo_all_blocks=1 00:09:35.721 --rc geninfo_unexecuted_blocks=1 00:09:35.721 00:09:35.721 ' 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.721 --rc genhtml_branch_coverage=1 00:09:35.721 --rc genhtml_function_coverage=1 00:09:35.721 --rc genhtml_legend=1 00:09:35.721 --rc geninfo_all_blocks=1 00:09:35.721 --rc geninfo_unexecuted_blocks=1 00:09:35.721 00:09:35.721 ' 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.721 --rc genhtml_branch_coverage=1 00:09:35.721 --rc genhtml_function_coverage=1 00:09:35.721 --rc genhtml_legend=1 00:09:35.721 --rc geninfo_all_blocks=1 00:09:35.721 --rc geninfo_unexecuted_blocks=1 00:09:35.721 00:09:35.721 ' 00:09:35.721 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.721 --rc genhtml_branch_coverage=1 00:09:35.721 --rc genhtml_function_coverage=1 00:09:35.721 --rc genhtml_legend=1 00:09:35.721 --rc geninfo_all_blocks=1 00:09:35.721 --rc geninfo_unexecuted_blocks=1 00:09:35.722 00:09:35.722 ' 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:35.722 
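These assignments pin the scenario: the controller handle will be nvme0, the injected error may hold the admin command for up to 15 s (err_injection_timeout, in microseconds), and the reset itself must finish within 5 s (test_timeout, in seconds). The next two assignments set the status the injection returns, SCT 0x0 / SC 0x1, i.e. Generic Command Status / Invalid Opcode, the same "INVALID OPCODE (00/01)" that shows up in the completion print further down. Restated as pass criteria, the end-of-run checks (the @75/@79 tests below) amount to this sketch:

    # pass criteria evaluated after the reset (inverted from the script's failure tests)
    (( nvme_status_sc == err_injection_sc && nvme_status_sct == err_injection_sct ))  # status matches
    (( diff_time <= test_timeout ))                                                   # reset finished in time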
21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:35.722 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=77304 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 77304 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 77304 ']' 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
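Once the target is listening on /var/tmp/spdk.sock, the trace below drives the whole stuck-admin-command scenario over RPC. Condensed into plain rpc.py calls, as a sketch rather than the script verbatim (the harness goes through its rpc_cmd wrapper, and $get_features_b64 stands for the base64 Get Features payload shown verbatim below):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # arm a one-shot injection: hold the next admin opc 10 (0x0a, Get Features)
    # for up to 15 s, then complete it with SCT 0x0 / SC 0x1
    $rpc bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$get_features_b64" > "$tmp_file" &
    sleep 2
    $rpc bdev_nvme_reset_controller nvme0   # the reset must manually complete the held command
    $rpc bdev_nvme_detach_controller nvme0

The reset is timed, and the completion captured in $tmp_file is decoded afterwards to confirm the injected status actually came back.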
00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:35.980 21:51:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:35.980 [2024-09-30 21:51:20.614966] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:09:35.980 [2024-09-30 21:51:20.615092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77304 ] 00:09:35.980 [2024-09-30 21:51:20.754490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:35.980 [2024-09-30 21:51:20.767230] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:36.238 [2024-09-30 21:51:20.805134] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:36.238 [2024-09-30 21:51:20.805509] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:36.238 [2024-09-30 21:51:20.805625] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.238 [2024-09-30 21:51:20.805717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:36.804 nvme0n1 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_j5AoS.txt 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:36.804 true 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727733081 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=77327 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 
'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:36.804 21:51:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:39.332 [2024-09-30 21:51:23.550656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:39.332 [2024-09-30 21:51:23.550920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:39.332 [2024-09-30 21:51:23.550944] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:39.332 [2024-09-30 21:51:23.550959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:39.332 [2024-09-30 21:51:23.554982] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:39.332 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 77327 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 77327 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 77327 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_j5AoS.txt 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 
"0x%02x\n"')) 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_j5AoS.txt 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 77304 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 77304 ']' 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 77304 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77304 00:09:39.332 killing process with pid 77304 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77304' 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 77304 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 77304 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 
00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:39.332 00:09:39.332 real 0m3.600s 00:09:39.332 user 0m12.819s 00:09:39.332 sys 0m0.472s 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.332 ************************************ 00:09:39.332 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:39.332 ************************************ 00:09:39.332 21:51:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:39.332 21:51:23 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:39.332 21:51:23 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:39.332 21:51:23 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.332 21:51:23 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.332 21:51:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.332 ************************************ 00:09:39.332 START TEST nvme_fio 00:09:39.332 ************************************ 00:09:39.332 21:51:23 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:39.332 21:51:23 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:39.332 21:51:23 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:39.332 21:51:23 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:39.332 21:51:23 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:39.332 21:51:23 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:39.332 21:51:23 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.332 21:51:23 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.332 21:51:23 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:39.332 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:39.332 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.332 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:39.332 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:39.332 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:39.332 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:39.332 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:39.591 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:39.591 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:39.850 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:39.850 21:51:24 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.850 21:51:24 nvme.nvme_fio -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:39.850 21:51:24 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:39.850 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:39.850 fio-3.35 00:09:39.850 Starting 1 thread 00:09:46.398 00:09:46.398 test: (groupid=0, jobs=1): err= 0: pid=77450: Mon Sep 30 21:51:30 2024 00:09:46.398 read: IOPS=21.7k, BW=84.7MiB/s (88.8MB/s)(169MiB/2001msec) 00:09:46.398 slat (nsec): min=3366, max=70670, avg=5188.53, stdev=2327.65 00:09:46.398 clat (usec): min=321, max=11794, avg=2944.86, stdev=897.98 00:09:46.398 lat (usec): min=326, max=11845, avg=2950.05, stdev=899.27 00:09:46.398 clat percentiles (usec): 00:09:46.398 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:46.398 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2704], 00:09:46.398 | 70.00th=[ 2868], 80.00th=[ 3195], 90.00th=[ 4146], 95.00th=[ 5014], 00:09:46.398 | 99.00th=[ 6259], 99.50th=[ 6915], 99.90th=[ 8586], 99.95th=[10028], 00:09:46.398 | 99.99th=[11600] 00:09:46.398 bw ( KiB/s): min=84736, max=88240, per=99.16%, avg=85997.33, stdev=1947.25, samples=3 00:09:46.398 iops : min=21184, max=22060, avg=21499.33, stdev=486.81, samples=3 00:09:46.398 write: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec); 0 zone resets 00:09:46.398 slat (usec): min=3, max=432, avg= 5.37, stdev= 3.17 00:09:46.398 clat (usec): min=198, max=11679, avg=2960.12, stdev=907.31 00:09:46.398 lat (usec): min=203, max=11693, avg=2965.49, stdev=908.60 00:09:46.398 clat percentiles (usec): 00:09:46.398 | 1.00th=[ 1975], 5.00th=[ 2245], 10.00th=[ 2343], 20.00th=[ 2442], 00:09:46.398 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2638], 60.00th=[ 2737], 00:09:46.398 | 70.00th=[ 2900], 80.00th=[ 3228], 90.00th=[ 4146], 95.00th=[ 5014], 00:09:46.398 | 99.00th=[ 6325], 99.50th=[ 6915], 99.90th=[ 8717], 99.95th=[10290], 00:09:46.398 | 99.99th=[11338] 00:09:46.398 bw ( KiB/s): min=84584, max=88816, 
per=100.00%, avg=86162.67, stdev=2311.63, samples=3 00:09:46.398 iops : min=21146, max=22204, avg=21540.67, stdev=577.91, samples=3 00:09:46.398 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:09:46.398 lat (msec) : 2=1.04%, 4=87.77%, 10=11.09%, 20=0.05% 00:09:46.398 cpu : usr=98.95%, sys=0.20%, ctx=2, majf=0, minf=624 00:09:46.398 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:46.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:46.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:46.398 issued rwts: total=43383,43068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:46.398 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:46.398 00:09:46.398 Run status group 0 (all jobs): 00:09:46.398 READ: bw=84.7MiB/s (88.8MB/s), 84.7MiB/s-84.7MiB/s (88.8MB/s-88.8MB/s), io=169MiB (178MB), run=2001-2001msec 00:09:46.398 WRITE: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:46.398 ----------------------------------------------------- 00:09:46.399 Suppressions used: 00:09:46.399 count bytes template 00:09:46.399 1 32 /usr/src/fio/parse.c 00:09:46.399 1 8 libtcmalloc_minimal.so 00:09:46.399 ----------------------------------------------------- 00:09:46.399 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:46.399 21:51:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:46.399 21:51:30 
nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:46.399 21:51:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:46.399 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:46.399 fio-3.35 00:09:46.399 Starting 1 thread 00:09:53.015 00:09:53.015 test: (groupid=0, jobs=1): err= 0: pid=77506: Mon Sep 30 21:51:36 2024 00:09:53.015 read: IOPS=19.3k, BW=75.3MiB/s (79.0MB/s)(151MiB/2001msec) 00:09:53.015 slat (nsec): min=4199, max=72974, avg=5630.54, stdev=2905.18 00:09:53.015 clat (usec): min=391, max=24649, avg=3278.54, stdev=1154.00 00:09:53.015 lat (usec): min=396, max=24654, avg=3284.17, stdev=1155.15 00:09:53.015 clat percentiles (usec): 00:09:53.015 | 1.00th=[ 1991], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:53.015 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3064], 00:09:53.015 | 70.00th=[ 3392], 80.00th=[ 3916], 90.00th=[ 4883], 95.00th=[ 5538], 00:09:53.015 | 99.00th=[ 6980], 99.50th=[ 7570], 99.90th=[ 9503], 99.95th=[18744], 00:09:53.015 | 99.99th=[20055] 00:09:53.015 bw ( KiB/s): min=75591, max=79561, per=100.00%, avg=77251.00, stdev=2063.27, samples=3 00:09:53.015 iops : min=18897, max=19890, avg=19312.33, stdev=516.02, samples=3 00:09:53.015 write: IOPS=19.2k, BW=75.2MiB/s (78.8MB/s)(150MiB/2001msec); 0 zone resets 00:09:53.015 slat (nsec): min=4309, max=85524, avg=5723.12, stdev=2932.41 00:09:53.015 clat (usec): min=407, max=32986, avg=3345.17, stdev=1554.71 00:09:53.015 lat (usec): min=412, max=32992, avg=3350.89, stdev=1555.55 00:09:53.015 clat percentiles (usec): 00:09:53.015 | 1.00th=[ 2024], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:53.015 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3097], 00:09:53.015 | 70.00th=[ 3425], 80.00th=[ 3949], 90.00th=[ 4948], 95.00th=[ 5604], 00:09:53.015 | 99.00th=[ 7111], 99.50th=[ 7898], 99.90th=[28181], 99.95th=[29492], 00:09:53.015 | 99.99th=[31327] 00:09:53.015 bw ( KiB/s): min=75823, max=79737, per=100.00%, avg=77373.67, stdev=2079.70, samples=3 00:09:53.015 iops : min=18955, max=19934, avg=19343.00, stdev=520.11, samples=3 00:09:53.015 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:53.015 lat (msec) : 2=0.94%, 4=79.80%, 10=19.04%, 20=0.09%, 50=0.09% 00:09:53.015 cpu : usr=98.80%, sys=0.05%, ctx=6, majf=0, minf=624 00:09:53.015 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:53.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:53.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:53.015 issued rwts: total=38580,38515,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:53.015 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:53.015 00:09:53.015 Run status group 0 (all jobs): 00:09:53.015 READ: bw=75.3MiB/s (79.0MB/s), 75.3MiB/s-75.3MiB/s (79.0MB/s-79.0MB/s), io=151MiB (158MB), run=2001-2001msec 00:09:53.015 WRITE: bw=75.2MiB/s (78.8MB/s), 75.2MiB/s-75.2MiB/s (78.8MB/s-78.8MB/s), io=150MiB 
(158MB), run=2001-2001msec 00:09:53.015 ----------------------------------------------------- 00:09:53.015 Suppressions used: 00:09:53.015 count bytes template 00:09:53.015 1 32 /usr/src/fio/parse.c 00:09:53.015 1 8 libtcmalloc_minimal.so 00:09:53.015 ----------------------------------------------------- 00:09:53.015 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:53.015 21:51:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:53.015 21:51:37 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:53.015 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:53.015 fio-3.35 00:09:53.015 Starting 1 thread 00:09:59.576 00:09:59.576 test: (groupid=0, jobs=1): err= 0: pid=77561: Mon Sep 30 21:51:43 2024 00:09:59.576 read: IOPS=18.0k, BW=70.2MiB/s (73.7MB/s)(143MiB/2030msec) 00:09:59.576 slat (usec): min=4, 
max=128, avg= 5.67, stdev= 2.91 00:09:59.576 clat (usec): min=1050, max=82982, avg=3503.74, stdev=3546.96 00:09:59.576 lat (usec): min=1065, max=82989, avg=3509.41, stdev=3547.44 00:09:59.576 clat percentiles (usec): 00:09:59.576 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2573], 00:09:59.576 | 30.00th=[ 2671], 40.00th=[ 2802], 50.00th=[ 2966], 60.00th=[ 3163], 00:09:59.577 | 70.00th=[ 3425], 80.00th=[ 3982], 90.00th=[ 5014], 95.00th=[ 5866], 00:09:59.577 | 99.00th=[ 7242], 99.50th=[ 8356], 99.90th=[80217], 99.95th=[81265], 00:09:59.577 | 99.99th=[82314] 00:09:59.577 bw ( KiB/s): min=66960, max=82696, per=100.00%, avg=72944.00, stdev=6871.27, samples=4 00:09:59.577 iops : min=16740, max=20674, avg=18236.00, stdev=1717.82, samples=4 00:09:59.577 write: IOPS=18.0k, BW=70.3MiB/s (73.7MB/s)(143MiB/2030msec); 0 zone resets 00:09:59.577 slat (nsec): min=4294, max=83124, avg=5777.67, stdev=2907.07 00:09:59.577 clat (usec): min=1028, max=124442, avg=3591.65, stdev=4760.29 00:09:59.577 lat (usec): min=1033, max=124449, avg=3597.43, stdev=4760.65 00:09:59.577 clat percentiles (msec): 00:09:59.577 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3], 00:09:59.577 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 4], 00:09:59.577 | 70.00th=[ 4], 80.00th=[ 5], 90.00th=[ 6], 95.00th=[ 6], 00:09:59.577 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 123], 99.95th=[ 124], 00:09:59.577 | 99.99th=[ 125] 00:09:59.577 bw ( KiB/s): min=66528, max=82720, per=100.00%, avg=72872.00, stdev=6995.50, samples=4 00:09:59.577 iops : min=16632, max=20680, avg=18218.00, stdev=1748.88, samples=4 00:09:59.577 lat (msec) : 2=0.67%, 4=79.20%, 10=19.79%, 20=0.05%, 50=0.12% 00:09:59.577 lat (msec) : 100=0.12%, 250=0.05% 00:09:59.577 cpu : usr=98.87%, sys=0.05%, ctx=3, majf=0, minf=624 00:09:59.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:59.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:59.577 issued rwts: total=36508,36527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.577 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:59.577 00:09:59.577 Run status group 0 (all jobs): 00:09:59.577 READ: bw=70.2MiB/s (73.7MB/s), 70.2MiB/s-70.2MiB/s (73.7MB/s-73.7MB/s), io=143MiB (150MB), run=2030-2030msec 00:09:59.577 WRITE: bw=70.3MiB/s (73.7MB/s), 70.3MiB/s-70.3MiB/s (73.7MB/s-73.7MB/s), io=143MiB (150MB), run=2030-2030msec 00:09:59.577 ----------------------------------------------------- 00:09:59.577 Suppressions used: 00:09:59.577 count bytes template 00:09:59.577 1 32 /usr/src/fio/parse.c 00:09:59.577 1 8 libtcmalloc_minimal.so 00:09:59.577 ----------------------------------------------------- 00:09:59.577 00:09:59.577 21:51:43 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:59.577 21:51:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:59.577 21:51:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:59.577 21:51:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:59.577 21:51:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:59.577 21:51:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:59.577 21:51:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:59.577 21:51:43 nvme.nvme_fio -- 
nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:59.577 21:51:43 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:59.577 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:59.577 fio-3.35 00:09:59.577 Starting 1 thread 00:10:07.719 00:10:07.719 test: (groupid=0, jobs=1): err= 0: pid=77622: Mon Sep 30 21:51:51 2024 00:10:07.719 read: IOPS=17.3k, BW=67.6MiB/s (70.9MB/s)(135MiB/2001msec) 00:10:07.719 slat (usec): min=4, max=116, avg= 6.30, stdev= 3.63 00:10:07.719 clat (usec): min=292, max=10898, avg=3679.38, stdev=1294.92 00:10:07.719 lat (usec): min=298, max=10949, avg=3685.68, stdev=1296.44 00:10:07.719 clat percentiles (usec): 00:10:07.719 | 1.00th=[ 2073], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2606], 00:10:07.719 | 30.00th=[ 2737], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 3687], 00:10:07.719 | 70.00th=[ 4178], 80.00th=[ 4752], 90.00th=[ 5538], 95.00th=[ 6325], 00:10:07.719 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 9241], 00:10:07.719 | 99.99th=[10683] 00:10:07.719 bw ( KiB/s): min=62546, max=73504, per=98.11%, avg=67886.00, stdev=5484.29, samples=3 00:10:07.719 iops : min=15636, max=18376, avg=16971.33, stdev=1371.32, samples=3 00:10:07.719 write: IOPS=17.3k, BW=67.6MiB/s (70.9MB/s)(135MiB/2001msec); 0 zone resets 00:10:07.719 slat (nsec): min=4294, max=98026, avg=6554.41, stdev=3793.63 00:10:07.719 clat (usec): min=303, max=10765, avg=3691.37, stdev=1292.76 00:10:07.719 lat (usec): min=308, max=10776, avg=3697.93, stdev=1294.32 00:10:07.719 
clat percentiles (usec): 00:10:07.719 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2638], 00:10:07.719 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3195], 60.00th=[ 3687], 00:10:07.719 | 70.00th=[ 4228], 80.00th=[ 4752], 90.00th=[ 5538], 95.00th=[ 6325], 00:10:07.719 | 99.00th=[ 7504], 99.50th=[ 7832], 99.90th=[ 8586], 99.95th=[ 9241], 00:10:07.719 | 99.99th=[10552] 00:10:07.719 bw ( KiB/s): min=62938, max=73136, per=97.79%, avg=67736.67, stdev=5125.47, samples=3 00:10:07.719 iops : min=15734, max=18284, avg=16934.00, stdev=1281.60, samples=3 00:10:07.719 lat (usec) : 500=0.02% 00:10:07.719 lat (msec) : 2=0.68%, 4=65.13%, 10=34.14%, 20=0.03% 00:10:07.719 cpu : usr=98.40%, sys=0.30%, ctx=6, majf=0, minf=624 00:10:07.719 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:07.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.719 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.719 issued rwts: total=34614,34652,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.719 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.719 00:10:07.719 Run status group 0 (all jobs): 00:10:07.719 READ: bw=67.6MiB/s (70.9MB/s), 67.6MiB/s-67.6MiB/s (70.9MB/s-70.9MB/s), io=135MiB (142MB), run=2001-2001msec 00:10:07.719 WRITE: bw=67.6MiB/s (70.9MB/s), 67.6MiB/s-67.6MiB/s (70.9MB/s-70.9MB/s), io=135MiB (142MB), run=2001-2001msec 00:10:07.719 ----------------------------------------------------- 00:10:07.719 Suppressions used: 00:10:07.719 count bytes template 00:10:07.719 1 32 /usr/src/fio/parse.c 00:10:07.719 1 8 libtcmalloc_minimal.so 00:10:07.719 ----------------------------------------------------- 00:10:07.719 00:10:07.719 21:51:51 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:07.719 21:51:51 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:07.719 00:10:07.719 real 0m27.590s 00:10:07.719 user 0m19.759s 00:10:07.719 sys 0m11.861s 00:10:07.719 ************************************ 00:10:07.719 END TEST nvme_fio 00:10:07.719 ************************************ 00:10:07.719 21:51:51 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.719 21:51:51 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:07.719 ************************************ 00:10:07.719 END TEST nvme 00:10:07.719 ************************************ 00:10:07.719 00:10:07.719 real 1m37.103s 00:10:07.719 user 3m37.522s 00:10:07.719 sys 0m23.209s 00:10:07.719 21:51:51 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.719 21:51:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:07.719 21:51:51 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:07.719 21:51:51 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:07.719 21:51:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:07.719 21:51:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.719 21:51:51 -- common/autotest_common.sh@10 -- # set +x 00:10:07.719 ************************************ 00:10:07.719 START TEST nvme_scc 00:10:07.719 ************************************ 00:10:07.719 21:51:51 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:07.719 * Looking for test storage... 
00:10:07.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:07.719 21:51:51 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:07.719 21:51:51 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:07.719 21:51:51 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:07.719 21:51:51 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:07.719 21:51:51 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:07.720 21:51:51 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.720 21:51:51 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:07.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.720 --rc genhtml_branch_coverage=1 00:10:07.720 --rc genhtml_function_coverage=1 00:10:07.720 --rc genhtml_legend=1 00:10:07.720 --rc geninfo_all_blocks=1 00:10:07.720 --rc geninfo_unexecuted_blocks=1 00:10:07.720 00:10:07.720 ' 00:10:07.720 21:51:51 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:07.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.720 --rc genhtml_branch_coverage=1 00:10:07.720 --rc genhtml_function_coverage=1 00:10:07.720 --rc genhtml_legend=1 00:10:07.720 --rc geninfo_all_blocks=1 00:10:07.720 --rc geninfo_unexecuted_blocks=1 00:10:07.720 00:10:07.720 ' 00:10:07.720 21:51:51 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:07.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.720 --rc genhtml_branch_coverage=1 00:10:07.720 --rc genhtml_function_coverage=1 00:10:07.720 --rc genhtml_legend=1 00:10:07.720 --rc geninfo_all_blocks=1 00:10:07.720 --rc geninfo_unexecuted_blocks=1 00:10:07.720 00:10:07.720 ' 00:10:07.720 21:51:51 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:07.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.720 --rc genhtml_branch_coverage=1 00:10:07.720 --rc genhtml_function_coverage=1 00:10:07.720 --rc genhtml_legend=1 00:10:07.720 --rc geninfo_all_blocks=1 00:10:07.720 --rc geninfo_unexecuted_blocks=1 00:10:07.720 00:10:07.720 ' 00:10:07.720 21:51:51 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.720 21:51:51 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.720 21:51:51 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.720 21:51:51 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.720 21:51:51 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.720 21:51:51 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:07.720 21:51:51 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
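An aside on the four nvme_fio invocations traced above: the harness never hands the SPDK plugin to fio directly. Because fio dlopen()s the ioengine, a plugin built with -fsanitize=address would fail to load unless the ASAN runtime is already in the process, so autotest_common.sh scrapes the runtime's path out of ldd output and prepends it to LD_PRELOAD. A condensed, stand-alone sketch of that pattern, with paths as they appear in the trace (a simplification, not the verbatim helper):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # the resolved path is field 3.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # Load the sanitizer runtime first, then the ioengine itself.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

The dotted PCI address in --filename is deliberate: fio reserves ':' as a separator inside filenames, so the plugin takes the transport address with '.' in place of ':'. The --bs=4096 argument comes from the identify step that precedes each run, where nvme.sh greps the controller output for 'Extended Data LBA' and, finding none on these QEMU namespaces, settles on a plain 4096-byte block size.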
00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:07.720 21:51:51 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:07.720 21:51:51 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:07.720 21:51:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:07.720 21:51:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:07.720 21:51:51 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:07.720 21:51:51 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:07.720 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:07.720 Waiting for block devices as requested 00:10:07.720 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.720 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.978 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.978 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:13.251 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:13.251 21:51:57 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:13.251 21:51:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:13.251 21:51:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:13.251 21:51:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.251 21:51:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
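From this point the log is functions.sh's scan_nvme_ctrls walking every controller that setup.sh just bound to the kernel nvme driver. The PCI address it reports for each /sys/class/nvme/nvmeN node is recoverable from sysfs alone; a minimal sketch of that lookup (the exact expansion functions.sh uses may differ, but the sysfs layout is what it relies on):

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # Each class device links back to its PCI function, e.g.
        # /sys/class/nvme/nvme0/device -> ../../../0000:00:11.0
        pci=$(basename "$(readlink -f "$ctrl/device")")
        echo "${ctrl##*/}: $pci"
    done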
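The wall of nvme0[...]= assignments that follows is nvme_get caching the controller's Identify data: it runs nvme-cli's id-ctrl against the device and folds each "field : value" row into a global associative array, so later checks can test capabilities without re-querying the hardware. A simplified re-creation of the idea (the real helper also copes with multi-word rows such as the ps0 power-state line, which this sketch glosses over):

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg%%[[:space:]]*}              # "vid       " -> "vid"
        val=${val#"${val%%[![:space:]]*}"}    # drop leading blanks, keep trailing ones
        [[ -n $reg && -n $val ]] && nvme0[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[vid]}"    # -> 0x1b36, the Red Hat/QEMU PCI vendor ID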
00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.251 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
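Most of what is scrolling past here is raw Identify Controller fields, and a few repay decoding rather than reading literally. mdts=7, captured a few entries up, is one: the NVMe spec defines MDTS as a power-of-two multiple of the controller's minimum memory page size, not a byte count, with 0 meaning "no limit". Assuming CAP.MPSMIN=0, i.e. 4 KiB pages, as is typical for QEMU's emulated controller, the largest transfer this device accepts per command works out as:

    mdts=7
    mpsmin=4096    # 2^(12 + CAP.MPSMIN) bytes, with CAP.MPSMIN assumed to be 0
    echo $(( (1 << mdts) * mpsmin / 1024 ))KiB    # -> 512KiB per I/O command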
00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.252 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:13.253 21:51:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:13.253 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:13.254 21:51:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.254 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
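[Editor's note] Once populated, the fields recorded above are ordinary associative-array lookups; a hypothetical snippet using values taken from this trace:

    nvme_get nvme0n1 id-ns /dev/nvme0n1
    echo "nsze=${nvme0n1[nsze]}"                                              # 0x140000
    echo "mssrl=${nvme0n1[mssrl]} mcl=${nvme0n1[mcl]} msrc=${nvme0n1[msrc]}"  # 128 128 127
    printf 'capacity: %d blocks\n' "${nvme0n1[nsze]}"                         # 1310720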
00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.255 21:51:57 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.255 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:13.256 21:51:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:13.256 21:51:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:13.256 21:51:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.256 21:51:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:13.256 21:51:57 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:13.256 
21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.256 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
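[Editor's note] For context: the loop driving these nvme_get calls picked up controller nvme1 at functions.sh@47-@52 above, after pci_can_use accepted 0000:00:10.0. A rough sketch of that discovery loop, reconstructed from the @47-@63 trace lines and reusing the nvme_get sketch above (how $pci is derived is an assumption, and the array declarations live elsewhere in the real script):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                           # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")      # @49 (assumed derivation)
        pci_can_use "$pci" || continue                       # @50: PCI allow/block lists
        ctrl_dev=${ctrl##*/}                                 # @51: e.g. nvme1
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"        # @52
        unset -n _ctrl_ns
        declare -n _ctrl_ns=${ctrl_dev}_ns                   # @53
        for ns in "$ctrl/${ctrl##*/}n"*; do                  # @54: this ctrl's namespaces
            [[ -e $ns ]] || continue                         # @55
            ns_dev=${ns##*/}                                 # @56: e.g. nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"          # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                      # @58: keyed by ns number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                         # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                    # @61
        bdfs["$ctrl_dev"]=$pci                               # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev           # @63
    done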
00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:13.257 21:51:57 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:13.257 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:13.258 21:51:57 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:13.258 21:51:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.258 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
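[Editor's note] The lbafN and power-state entries are kept as opaque strings (e.g. nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' earlier in this trace); pulling a concrete value back out is a one-liner. A hypothetical helper, not part of nvme/functions.sh:

    # Hypothetical: derive the in-use block size from the lbafN strings
    # stored by nvme_get above. "lbads" is the LBA data size as a power of 2.
    block_size_in_use() {
        local -n ns=$1                         # nameref to e.g. nvme0n1
        local f
        for f in "${!ns[@]}"; do
            [[ $f == lbaf* && ${ns[$f]} == *'(in use)'* ]] || continue
            [[ ${ns[$f]} =~ lbads:([0-9]+) ]] && echo $((1 << BASH_REMATCH[1]))
        done
    }

    # block_size_in_use nvme0n1   # -> 4096 (from lbaf4: lbads:12)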
00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.259 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.260 
21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:13.260 21:51:57 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:13.260 21:51:57 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:13.260 21:51:57 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.260 21:51:57 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:13.260 21:51:57 nvme_scc -- 
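At this point the trace has finished registering `nvme1` and moved on to `/sys/class/nvme/nvme2`, gating it through `pci_can_use` (from scripts/common.sh) before identifying it. The surrounding loop plausibly looks like the sketch below; the `readlink`-based PCI lookup is an assumption, since the trace only shows the resulting `pci=0000:00:12.0` assignment:

```bash
declare -A ctrls=() nvmes=() bdfs=()          # global registries filled at @60-@62
declare -a ordered_ctrls=()                   # and @63 in the trace above

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed BDF lookup, e.g. 0000:00:12.0
    pci_can_use "$pci" || continue                    # skip block-listed controllers
    ctrl_dev=${ctrl##*/}                              # nvme2
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fills the nvme2 array as traced
    # (namespace scan omitted here; see the nameref sketch further down)
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of this ctrl's namespace map
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # numeric index 2 -> nvme2
done
```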
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.260 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:13.261 21:51:57 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:13.261 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:13.262 21:51:57 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:13.262 21:51:57 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:13.262 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:13.263 
21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.263 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
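The `local -n _ctrl_ns=nvme2_ns` line above is a bash nameref: it lets the same namespace-scanning loop write into whichever per-controller map is current, without `eval`. A stand-alone illustration (the array and helper names here are hypothetical):

```bash
declare -A nvme2_ns=()

register_ns() {
    local -n map=$1              # nameref: 'map' now aliases the named array
    local ns_dev=$2              # e.g. nvme2n1
    map[${ns_dev##*n}]=$ns_dev   # key is the namespace index: 1 -> nvme2n1
}

register_ns nvme2_ns nvme2n1
declare -p nvme2_ns              # declare -A nvme2_ns=([1]="nvme2n1" )
```

This matches the `_ctrl_ns[${ns##*n}]=nvme1n1` assignment traced earlier: `${ns##*n}` strips everything up to the last `n`, leaving only the numeric namespace index as the key.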
0x100000 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:13.264 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.265 21:51:57 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:13.265 21:51:57 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:13.265 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:13.266 21:51:57 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
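Every record above follows one pattern from nvme/functions.sh: each line of "nvme id-ns" output is split on the first colon (IFS=:, functions.sh@21), the value side is checked for non-emptiness (functions.sh@22), and the pair is eval'ed into a global associative array such as nvme2n2 (functions.sh@23). A minimal sketch of that loop, reconstructed from the trace rather than from the actual script, assuming nvme-cli is installed and /dev/nvme0n1 stands in for the device under test:

  #!/usr/bin/env bash
  # Sketch of the nvme_get read/eval pattern seen in the trace; the real
  # nvme/functions.sh may differ in detail.
  declare -A ns_info=()
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue            # same guard as functions.sh@22
      reg=${reg//[[:space:]]/}             # "lbaf  4 " -> "lbaf4"
      val=${val#"${val%%[![:space:]]*}"}   # trim leading blanks only; inner
                                           # colons survive ("ms:0 lbads:12 rp:0")
      ns_info[$reg]=$val
  done < <(nvme id-ns /dev/nvme0n1)
  printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"

Because read consumes only up to the first colon into reg, multi-colon values such as the lbaf descriptors land intact in val, which is why the arrays above hold strings like 'ms:0 lbads:12 rp:0 (in use)'.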
00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
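Worked numbers from the fields just captured: nvme2n2 reports nsze=0x100000 and flbas=0x4, and format 4 of the shared LBA table (shown for nvme2n1 above) is ms:0 lbads:12 rp:0, i.e. 2^12 = 4096-byte blocks without metadata. Assuming those values, the namespace size follows from shell arithmetic:

  blocks=$(( 0x100000 ))       # nsze: 1,048,576 logical blocks
  bs=$(( 1 << 12 ))            # lbads:12 -> 4096-byte blocks
  echo $(( blocks * bs )) bytes          # 4294967296 bytes
  echo $(( blocks * bs >> 30 )) GiB      # 4 GiB per QEMU namespace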
00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.266 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 
21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:57 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 
21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:13.267 21:51:58 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.267 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:13.268 
21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:13.268 21:51:58 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:13.268 21:51:58 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:13.268 21:51:58 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:13.268 21:51:58 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:13.268 21:51:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.268 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
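The surrounding loop (functions.sh@47-63, visible just before this id-ctrl dump) registers each controller it finds: the sysfs entry yields a PCI address (pci=0000:00:13.0), pci_can_use filters it, and the result lands in the ctrls, nvmes, bdfs and ordered_ctrls tables. A hedged sketch of that enumeration skeleton; the readlink-based address lookup is an assumption, since the trace does not show how functions.sh@49 derives the BDF:

  declare -A ctrls=() bdfs=()
  declare -a ordered_ctrls=()
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                       # functions.sh@48
      pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed; e.g. 0000:00:13.0
      ctrl_dev=${ctrl##*/}                             # nvme3 (functions.sh@51)
      ctrls[$ctrl_dev]=$ctrl_dev                       # functions.sh@60
      bdfs[$ctrl_dev]=$pci                             # functions.sh@62
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # functions.sh@63
  done
  for dev in "${!bdfs[@]}"; do echo "$dev -> ${bdfs[$dev]}"; done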
00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
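The hex registers parsed here are bitmasks. For nvme3, oaes=0x100; per the NVMe base specification, bit 8 of OAES advertises Namespace Attribute Notices (the bit meaning comes from the spec, not from this log), so the capability can be tested with plain shell arithmetic:

  oaes=$(( 0x100 ))                # value parsed above for nvme3
  if (( oaes & (1 << 8) )); then   # bit 8: Namespace Attribute Notices
      echo "namespace attribute change events supported"
  fi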
00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.269 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 
21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:13.270 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:13.271 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.529 21:51:58 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.529 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
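The block above is functions.sh's nvme_get helper at work: each line of `nvme id-ctrl /dev/nvme3` output is split on its first ':' (IFS=: with read -r reg val), empty values are skipped by the [[ -n $val ]] guard at functions.sh@22, and the eval at functions.sh@23 stores the remainder into the nvme3 associative array. Because read puts everything after the first colon into val, multi-colon values such as the ps0 power-state string survive intact. A minimal standalone sketch of that pattern; the reg cleanup (lowercasing, stripping whitespace) is an assumption not visible in this excerpt, and the shift/namespace handling of the real helper is omitted:

    # Sketch of the nvme_get parsing loop traced above (assumes nvme-cli).
    declare -A nvme3=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue            # same guard as functions.sh@22
        reg=${reg,,}; reg=${reg//[[:space:]]/}  # assumed normalization
        val=${val# }                         # drop the space after ':'
        eval "nvme3[$reg]=\"$val\""          # e.g. nvme3[oncs]=0x15d
    done < <(nvme id-ctrl /dev/nvme3)
    echo "${nvme3[oncs]:-oncs not reported}"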
00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:13.530 21:51:58 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:13.530 21:51:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:13.530 
21:51:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:13.530 21:51:58 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:13.530 21:51:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:13.530 21:51:58 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:13.530 21:51:58 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:13.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:14.377 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.377 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.377 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:14.377 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:14.637 21:51:59 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:14.637 21:51:59 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:14.637 21:51:59 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.637 21:51:59 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:14.637 ************************************ 00:10:14.637 START TEST nvme_simple_copy 00:10:14.638 ************************************ 00:10:14.638 21:51:59 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:14.895 Initializing NVMe Controllers 00:10:14.895 Attaching to 0000:00:10.0 00:10:14.895 Controller supports SCC. Attached to 0000:00:10.0 00:10:14.895 Namespace ID: 1 size: 6GB 00:10:14.895 Initialization complete. 00:10:14.895 00:10:14.895 Controller QEMU NVMe Ctrl (12340 ) 00:10:14.895 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:14.895 Namespace Block Size:4096 00:10:14.895 Writing LBAs 0 to 63 with Random Data 00:10:14.895 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:14.895 LBAs matching Written Data: 64 00:10:14.895 00:10:14.895 real 0m0.257s 00:10:14.895 user 0m0.094s 00:10:14.895 sys 0m0.061s 00:10:14.895 21:51:59 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.895 ************************************ 00:10:14.895 END TEST nvme_simple_copy 00:10:14.895 ************************************ 00:10:14.895 21:51:59 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:14.895 ************************************ 00:10:14.895 END TEST nvme_scc 00:10:14.895 ************************************ 00:10:14.895 00:10:14.895 real 0m7.862s 00:10:14.895 user 0m1.072s 00:10:14.895 sys 0m1.416s 00:10:14.895 21:51:59 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:14.895 21:51:59 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:14.895 21:51:59 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:14.895 21:51:59 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:14.895 21:51:59 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:14.895 21:51:59 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:14.895 21:51:59 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:14.896 21:51:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:14.896 21:51:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:14.896 21:51:59 -- common/autotest_common.sh@10 -- # set +x 00:10:14.896 ************************************ 00:10:14.896 START TEST nvme_fdp 00:10:14.896 ************************************ 00:10:14.896 21:51:59 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh 00:10:14.896 * Looking for test storage... 
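Before the simple-copy run above, get_ctrls_with_feature walked every discovered controller and kept the ones whose ONCS register has bit 8 set, the Copy command (SCC) bit in the NVMe spec; 0x15d & 0x100 is non-zero, so all four controllers qualified and nvme1 was picked first. A compact rework of that check, with the register read-back done through a bash nameref as at functions.sh@73; the helper names here are illustrative, not the exact source:

    get_reg() {                      # illustrative stand-in for get_nvme_ctrl_feature
        local -n _ctrl=$1            # nameref: _ctrl aliases the array named nvme1, nvme3, ...
        [[ -n ${_ctrl[$2]} ]] && echo "${_ctrl[$2]}"
    }
    ctrl_has_scc() {
        local oncs
        oncs=$(get_reg "$1" oncs) || return 1
        (( oncs & 1 << 8 ))          # bit 8 of ONCS = Copy command support
    }
    declare -A nvme1=([oncs]=0x15d)
    ctrl_has_scc nvme1 && echo nvme1 # mirrors the 'echo nvme1' in the trace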
00:10:14.896 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:14.896 21:51:59 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:14.896 21:51:59 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version 00:10:14.896 21:51:59 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:15.155 21:51:59 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.155 21:51:59 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:15.155 21:51:59 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.155 21:51:59 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:15.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.155 --rc genhtml_branch_coverage=1 00:10:15.155 --rc genhtml_function_coverage=1 00:10:15.155 --rc genhtml_legend=1 00:10:15.155 --rc geninfo_all_blocks=1 00:10:15.155 --rc geninfo_unexecuted_blocks=1 00:10:15.155 00:10:15.155 ' 00:10:15.155 21:51:59 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:15.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.155 --rc genhtml_branch_coverage=1 00:10:15.155 --rc genhtml_function_coverage=1 00:10:15.155 --rc genhtml_legend=1 00:10:15.155 --rc geninfo_all_blocks=1 00:10:15.155 --rc geninfo_unexecuted_blocks=1 00:10:15.155 00:10:15.155 ' 00:10:15.155 21:51:59 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:15.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.155 --rc genhtml_branch_coverage=1 00:10:15.155 --rc genhtml_function_coverage=1 00:10:15.155 --rc genhtml_legend=1 00:10:15.155 --rc geninfo_all_blocks=1 00:10:15.156 --rc geninfo_unexecuted_blocks=1 00:10:15.156 00:10:15.156 ' 00:10:15.156 21:51:59 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:15.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.156 --rc genhtml_branch_coverage=1 00:10:15.156 --rc genhtml_function_coverage=1 00:10:15.156 --rc genhtml_legend=1 00:10:15.156 --rc geninfo_all_blocks=1 00:10:15.156 --rc geninfo_unexecuted_blocks=1 00:10:15.156 00:10:15.156 ' 00:10:15.156 21:51:59 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:15.156 21:51:59 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.156 21:51:59 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.156 21:51:59 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.156 21:51:59 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.156 21:51:59 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.156 21:51:59 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.156 21:51:59 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.156 21:51:59 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:15.156 21:51:59 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
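The lcov probe traced just above gates the coverage flags on the tool's version: lt 1.15 2 splits both strings on '.', '-' or ':' and compares the components numerically, left to right. The real cmp_versions takes the operator as an argument and validates each field with decimal; this sketch collapses it to the less-than case only:

    lt() {                             # collapsed sketch of cmp_versions "$1" '<' "$2"
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                       # equal is not less-than
    }
    lt 1.15 2 && echo "lcov 1.15 < 2"  # true: 1 < 2 in the first field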
00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:15.156 21:51:59 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:15.156 21:51:59 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:15.156 21:51:59 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:15.416 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:15.676 Waiting for block devices as requested 00:10:15.676 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.676 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.676 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.936 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:21.247 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:21.248 21:52:05 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:21.248 21:52:05 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:21.248 21:52:05 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:21.248 21:52:05 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:21.248 21:52:05 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
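scan_nvme_ctrls, entered here, iterates the sysfs controller nodes, resolves each one's PCI address, and asks pci_can_use whether the device is permitted; the allow-list is empty in this run (note the bare '[[ =~ 0000:00:11.0 ]]' at scripts/common.sh@21), so every controller is accepted and handed to nvme_get. A sketch of that outer loop, with PCI_ALLOWED as an assumed name for the filter variable the trace does not expand:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
        # Empty allow-list means everything passes, as in the trace above.
        if [[ -n ${PCI_ALLOWED:-} && ! $PCI_ALLOWED =~ $pci ]]; then
            continue
        fi
        ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
        echo "nvme_get $ctrl_dev id-ctrl /dev/$ctrl_dev"  # per-controller parse
    done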
00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:21.248 21:52:05 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:21.248 21:52:05 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.248 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:21.249 21:52:05 nvme_fdp -- 
00:10:21.249 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0 id-ctrl (continued): unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:21.250 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0 id-ctrl: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:21.250 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0 id-ctrl: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
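The eval/read churn condensed above is nvme/functions.sh's generic scraper: nvme_get() splits each "field : value" line of nvme-cli output on ':' and stores it in a bash associative array named after the device (nvme0, nvme0n1, and so on). A minimal standalone sketch of the same pattern, assuming a plain `nvme` binary on PATH (the harness invokes /usr/local/src/nvme-cli/nvme); parse_id_output and the flat info array are illustrative stand-ins, not the script's exact internals:

    #!/usr/bin/env bash
    # Sketch of the id-ctrl/id-ns scraping pattern visible in the trace above.
    declare -A info
    parse_id_output() {
      local reg val
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field name with whitespace stripped
        [[ -n $reg && -n $val ]] || continue
        info[$reg]=${val# }             # value, leading space removed
      done < <(nvme id-ctrl "$1")       # assumed path; the log uses /usr/local/src/nvme-cli/nvme
    }
    parse_id_output /dev/nvme0
    echo "sqes=${info[sqes]} cqes=${info[cqes]} nn=${info[nn]}"

The real nvme_get() builds the array name dynamically, which is why every field shows up twice in the xtrace: once as the quoted eval string, once as the resulting assignment.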
00:10:21.250 21:52:05 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:21.250 21:52:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:21.250 21:52:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:21.250 21:52:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:21.251 21:52:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:21.251 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:21.251 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1 id-ns: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
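flbas ties those fields together: its low four bits select the active LBA format, so flbas=0x4 picks lbaf4 (lbads:12, i.e. 4096-byte logical blocks, matching the "(in use)" marker), and with nsze=0x140000 blocks the namespace works out to exactly 5 GiB. A quick arithmetic check in the shell, using the logged values (variable names are illustrative):

    # Capacity check from the id-ns fields above.
    flbas=0x4
    nsze=0x140000
    lbads=12                    # lbaf$((flbas & 0xf)) is lbaf4: "lbads:12"
    bs=$((1 << lbads))          # 4096-byte logical blocks
    echo $((nsze * bs))         # 5368709120 bytes = 5 GiB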
00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
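Those assignments are what the discovery pass produces: ctrls maps the device to its register array, nvmes to its namespace map, bdfs to its PCI address, and ordered_ctrls keeps index order, so later steps can look a device up instead of re-probing sysfs. A sketch of a consumer, with the maps pre-seeded to the values just logged; the loop itself is illustrative, not taken from the harness:

    # Walk the registry the discovery loop just populated (values from the log).
    declare -A ctrls=( [nvme0]=nvme0 )         # device -> controller array name
    declare -A nvmes=( [nvme0]=nvme0_ns )      # device -> namespace-map array name
    declare -A bdfs=(  [nvme0]=0000:00:11.0 )  # device -> PCI address (BDF)
    for ctrl in "${!ctrls[@]}"; do
      printf '%s at %s, namespaces in %s\n' "$ctrl" "${bdfs[$ctrl]}" "${nvmes[$ctrl]}"
    done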
IFS=: 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.252 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 
21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.253 21:52:05 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:21.253 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 
21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.254 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.254 21:52:05 nvme_fdp -- 
00:10:21.254 21:52:05 nvme_fdp -- [xtrace condensed] nvme/functions.sh@21-23: remaining nvme1 id-ctrl fields read into nvme1[]: nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
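The repeating IFS=: / read / eval pattern traced above is nvme/functions.sh's nvme_get helper turning `nvme id-ctrl` output into a global associative array. A minimal re-sketch of that loop, reconstructed from the xtrace alone (argument handling simplified; this is not copied from the real functions.sh):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # declare the global map, e.g. nvme1=()
        while IFS=: read -r reg val; do     # nvme-cli prints lines like "oncs : 0x15d"
            [[ -n $val ]] || continue       # skip banner lines with no value part
            reg=${reg//[[:space:]]/}        # strip padding around the field name
            val=${val# }                    # drop the leading space on the value
            eval "${ref}[$reg]=\"\$val\""   # -> nvme1[oncs]='0x15d'
        done < <("$@")                      # "$@" is the id-ctrl/id-ns command line
    }

    # usage matching the trace (binary path taken from the log):
    nvme_get nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1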
00:10:21.255 21:52:05 nvme_fdp -- [xtrace condensed] nvme1 power state: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:21.255 21:52:05 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:21.255 21:52:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:21.255 21:52:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:21.255 21:52:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:21.255 21:52:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:21.255 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:21.255 21:52:05 nvme_fdp -- [xtrace condensed] nvme1n1 id-ns fields read into nvme1n1[]: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:21.256 21:52:05 nvme_fdp -- [xtrace condensed] nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:21.257 21:52:05 nvme_fdp -- [xtrace condensed] scripts/common.sh@18-27: 0000:00:12.0 not blocked, return 0
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
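The registration steps just traced (functions.sh@47-@63) form the controller enumeration loop: every /sys/class/nvme/nvmeX that passes the pci_can_use() filter is identified and recorded in a set of bookkeeping arrays. A hedged reconstruction of that loop from the xtrace (the sysfs-to-BDF lookup and the pci_can_use stub here are assumptions for illustration, not the real scripts/common.sh logic):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    pci_can_use() {                                       # stub: accept unless listed in PCI_BLOCKED
        [[ " ${PCI_BLOCKED:-} " != *" $1 "* ]]
    }

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                    # skip devices blocked by the env
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fill nvme2[...] as traced above
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of its namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexed by controller number
    done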
00:10:21.257 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:21.257 21:52:05 nvme_fdp -- [xtrace condensed] nvme2 id-ctrl fields read into nvme2[]: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0
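Aside on the two non-zero thermal fields above: wctemp=343 and cctemp=373 are the warning and critical composite-temperature thresholds, which Identify Controller reports in kelvin. A quick sanity check of what those mean in Celsius:

    # kelvin -> Celsius (integer sketch; the reported unit is K)
    for k in 343 373; do
        printf '%s K = %s C\n' "$k" "$((k - 273))"    # 343 K = 70 C, 373 K = 100 C
    done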
00:10:21.259 21:52:05 nvme_fdp -- [xtrace condensed] remaining nvme2 id-ctrl fields read into nvme2[]: anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:21.260 21:52:05 nvme_fdp -- [xtrace condensed] nvme2 power state: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
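The namespace walk that just started for nvme2 mirrors the one traced for nvme1 (functions.sh@53-@58): a nameref aliases the per-controller map, and a sysfs glob visits each namespace node. A hedged sketch of that inner loop as reconstructed from the xtrace:

    declare -gA "${ctrl_dev}_ns=()"             # the per-controller namespace map
    declare -n _ctrl_ns=${ctrl_dev}_ns          # nameref, as in the trace's local -n
    for ns in "$ctrl/${ctrl##*/}n"*; do         # /sys/class/nvme/nvme2/nvme2n1 ...
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                        # nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev" # fill nvme2n1[...]
        _ctrl_ns[${ns##*n}]=$ns_dev             # keyed by namespace index ("1")
    done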
nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
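The records above (functions.sh@16 through @23) show the shape of the nvme_get helper: it runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1, splits each "field : value" output line with IFS=: read -r reg val, and evals every non-empty pair into the global associative array nvme2n1. A minimal standalone sketch of that pattern, assuming a root shell and the nvme CLI's one-pair-per-line output; parse_id_ns and ns_info are illustrative names, not the SPDK helpers themselves:

  #!/usr/bin/env bash
  declare -A ns_info

  parse_id_ns() {
      local dev=$1 reg val
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}               # field names are padded before the ':'
          val="${val#"${val%%[![:space:]]*}"}"   # drop leading spaces, keep the rest verbatim
          [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
      done < <(nvme id-ns "$dev")
  }

  parse_id_ns /dev/nvme2n1
  echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"

Feeding the loop from a process substitution rather than a pipe keeps the array in the calling shell, so the parsed fields stay queryable after the loop; multi-colon values such as the ps0 power-state line survive intact because read hands everything after the first ':' to the last variable.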
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
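The @54 through @58 records in this stretch are the per-namespace discovery loop: each /sys/class/nvme/nvme2/nvme2n* entry is existence-checked, handed to nvme_get as above, and registered in the controller's namespace map under its namespace number (functions.sh@58). A sketch of just the enumeration step, with ctrl_ns as an illustrative stand-in for the script's nameref:

  ctrl=/sys/class/nvme/nvme2
  declare -A ctrl_ns
  for ns in "$ctrl/${ctrl##*/}n"*; do   # expands to nvme2n1, nvme2n2, ...
      [[ -e $ns ]] || continue          # an unmatched glob is returned unexpanded
      ns_dev=${ns##*/}                  # e.g. nvme2n1
      ctrl_ns[${ns_dev##*n}]=$ns_dev    # key "1" -> nvme2n1
  done

The -e guard mirrors functions.sh@55 and covers the case where the glob matches nothing and the literal pattern falls through.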
00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.260 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:21.261 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:21.262 21:52:05 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.262 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
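Every namespace parsed so far reports the same geometry, and nvme2n3 below repeats it: nsze = ncap = nuse = 0x100000, flbas = 0x4, and LBA format 4 is the entry tagged "(in use)" with lbads:12 and ms:0. Since flbas bits 3:0 select the active format and lbads is the log2 of the data block size, each namespace is 0x100000 blocks of 4096 bytes. A one-liner to check the arithmetic (variable names illustrative, values taken from the captured fields):

  nsze=0x100000 lbads=12
  echo $(( nsze * (1 << lbads) ))   # 4294967296 bytes = 4 GiB per namespace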
00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.263 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:21.264 
21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:21.264 21:52:05 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:21.264 21:52:05 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:21.264 21:52:05 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:21.265 21:52:05 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:21.265 21:52:05 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:21.265 21:52:05 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:21.265 21:52:05 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 
21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.265 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 
21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.266 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.267 21:52:05 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:21.267 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:21.268 21:52:05 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:21.268 21:52:05 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:21.268 21:52:05 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:21.268 21:52:05 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:21.834 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:22.092 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.092 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.092 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.350 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:22.350 21:52:06 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:22.350 21:52:06 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:22.350 21:52:06 
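The controller selection that just finished comes down to a single bit test: ctrl_has_fdp reads each controller's cached CTRATT word and keeps those with bit 19 (Flexible Data Placement) set, which is why the 0x8000 controllers are skipped and 0x88010 selects nvme3. A sketch of that test with the values from this run:

    # CTRATT values captured above; bit 19 (0x80000) flags FDP support
    for entry in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
        name=${entry%%:*} ctratt=${entry#*:}
        (( ctratt & (1 << 19) )) && echo "$name supports FDP"
    done

Only nvme3 is printed, matching the "echo nvme3" in the trace.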
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.350 21:52:06 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:22.350 ************************************ 00:10:22.350 START TEST nvme_flexible_data_placement 00:10:22.350 ************************************ 00:10:22.350 21:52:06 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:22.608 Initializing NVMe Controllers 00:10:22.608 Attaching to 0000:00:13.0 00:10:22.608 Controller supports FDP Attached to 0000:00:13.0 00:10:22.608 Namespace ID: 1 Endurance Group ID: 1 00:10:22.608 Initialization complete. 00:10:22.608 00:10:22.608 ================================== 00:10:22.608 == FDP tests for Namespace: #01 == 00:10:22.608 ================================== 00:10:22.608 00:10:22.608 Get Feature: FDP: 00:10:22.608 ================= 00:10:22.608 Enabled: Yes 00:10:22.608 FDP configuration Index: 0 00:10:22.608 00:10:22.608 FDP configurations log page 00:10:22.608 =========================== 00:10:22.608 Number of FDP configurations: 1 00:10:22.608 Version: 0 00:10:22.608 Size: 112 00:10:22.608 FDP Configuration Descriptor: 0 00:10:22.608 Descriptor Size: 96 00:10:22.608 Reclaim Group Identifier format: 2 00:10:22.608 FDP Volatile Write Cache: Not Present 00:10:22.608 FDP Configuration: Valid 00:10:22.608 Vendor Specific Size: 0 00:10:22.608 Number of Reclaim Groups: 2 00:10:22.608 Number of Reclaim Unit Handles: 8 00:10:22.608 Max Placement Identifiers: 128 00:10:22.608 Number of Namespaces Supported: 256 00:10:22.608 Reclaim Unit Nominal Size: 6000000 bytes 00:10:22.608 Estimated Reclaim Unit Time Limit: Not Reported 00:10:22.608 RUH Desc #000: RUH Type: Initially Isolated 00:10:22.608 RUH Desc #001: RUH Type: Initially Isolated 00:10:22.608 RUH Desc #002: RUH Type: Initially Isolated 00:10:22.608 RUH Desc #003: RUH Type: Initially Isolated 00:10:22.608 RUH Desc #004: RUH Type: Initially Isolated 00:10:22.608 RUH Desc #005: RUH Type: Initially Isolated 00:10:22.608 RUH Desc #006: RUH Type: Initially Isolated 00:10:22.608 RUH Desc #007: RUH Type: Initially Isolated 00:10:22.608 00:10:22.608 FDP reclaim unit handle usage log page 00:10:22.608 ====================================== 00:10:22.608 Number of Reclaim Unit Handles: 8 00:10:22.608 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:22.608 RUH Usage Desc #001: RUH Attributes: Unused 00:10:22.608 RUH Usage Desc #002: RUH Attributes: Unused 00:10:22.608 RUH Usage Desc #003: RUH Attributes: Unused 00:10:22.608 RUH Usage Desc #004: RUH Attributes: Unused 00:10:22.608 RUH Usage Desc #005: RUH Attributes: Unused 00:10:22.608 RUH Usage Desc #006: RUH Attributes: Unused 00:10:22.608 RUH Usage Desc #007: RUH Attributes: Unused 00:10:22.608 00:10:22.608 FDP statistics log page 00:10:22.608 ======================= 00:10:22.608 Host bytes with metadata written: 1263390720 00:10:22.608 Media bytes with metadata written: 1264246784 00:10:22.608 Media bytes erased: 0 00:10:22.609 00:10:22.609 FDP Reclaim unit handle status 00:10:22.609 ============================== 00:10:22.609 Number of RUHS descriptors: 2 00:10:22.609 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002b23 00:10:22.609 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:22.609 00:10:22.609 FDP write on placement id: 0 success 00:10:22.609 00:10:22.609 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:10:22.609 00:10:22.609 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:22.609 00:10:22.609 Get Feature: FDP Events for Placement handle: #0 00:10:22.609 ======================== 00:10:22.609 Number of FDP Events: 6 00:10:22.609 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:22.609 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:22.609 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:22.609 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:22.609 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:22.609 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:22.609 00:10:22.609 FDP events log page 00:10:22.609 =================== 00:10:22.609 Number of FDP events: 1 00:10:22.609 FDP Event #0: 00:10:22.609 Event Type: RU Not Written to Capacity 00:10:22.609 Placement Identifier: Valid 00:10:22.609 NSID: Valid 00:10:22.609 Location: Valid 00:10:22.609 Placement Identifier: 0 00:10:22.609 Event Timestamp: 3 00:10:22.609 Namespace Identifier: 1 00:10:22.609 Reclaim Group Identifier: 0 00:10:22.609 Reclaim Unit Handle Identifier: 0 00:10:22.609 00:10:22.609 FDP test passed 00:10:22.609 00:10:22.609 real 0m0.208s 00:10:22.609 user 0m0.055s 00:10:22.609 sys 0m0.052s 00:10:22.609 21:52:07 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.609 21:52:07 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:22.609 ************************************ 00:10:22.609 END TEST nvme_flexible_data_placement 00:10:22.609 ************************************ 00:10:22.609 ************************************ 00:10:22.609 END TEST nvme_fdp 00:10:22.609 ************************************ 00:10:22.609 00:10:22.609 real 0m7.583s 00:10:22.609 user 0m1.047s 00:10:22.609 sys 0m1.352s 00:10:22.609 21:52:07 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:22.609 21:52:07 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:22.609 21:52:07 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:22.609 21:52:07 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:22.609 21:52:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:22.609 21:52:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:22.609 21:52:07 -- common/autotest_common.sh@10 -- # set +x 00:10:22.609 ************************************ 00:10:22.609 START TEST nvme_rpc 00:10:22.609 ************************************ 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:22.609 * Looking for test storage... 
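The FDP report printed by the fdp tool above corresponds to four endurance-group log pages defined by NVMe 2.0: 0x20 FDP configurations, 0x21 reclaim unit handle usage, 0x22 FDP statistics, and 0x23 FDP events. A hedged sketch of dumping the same pages with stock nvme-cli, run while the kernel driver still owns the controller (i.e. before setup.sh rebinds it); the buffer lengths and the --lsi endurance-group value are illustrative assumptions:

    nvme get-log /dev/nvme3 --log-id=0x20 --log-len=512 --lsi=1   # FDP configurations
    nvme get-log /dev/nvme3 --log-id=0x21 --log-len=64  --lsi=1   # RUH usage
    nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64  --lsi=1   # FDP statistics
    nvme get-log /dev/nvme3 --log-id=0x23 --log-len=128 --lsi=1   # FDP events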
00:10:22.609 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.609 21:52:07 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.609 --rc genhtml_branch_coverage=1 00:10:22.609 --rc genhtml_function_coverage=1 00:10:22.609 --rc genhtml_legend=1 00:10:22.609 --rc geninfo_all_blocks=1 00:10:22.609 --rc geninfo_unexecuted_blocks=1 00:10:22.609 00:10:22.609 ' 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.609 --rc genhtml_branch_coverage=1 00:10:22.609 --rc genhtml_function_coverage=1 00:10:22.609 --rc genhtml_legend=1 00:10:22.609 --rc geninfo_all_blocks=1 00:10:22.609 --rc geninfo_unexecuted_blocks=1 00:10:22.609 00:10:22.609 ' 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:10:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.609 --rc genhtml_branch_coverage=1 00:10:22.609 --rc genhtml_function_coverage=1 00:10:22.609 --rc genhtml_legend=1 00:10:22.609 --rc geninfo_all_blocks=1 00:10:22.609 --rc geninfo_unexecuted_blocks=1 00:10:22.609 00:10:22.609 ' 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:22.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.609 --rc genhtml_branch_coverage=1 00:10:22.609 --rc genhtml_function_coverage=1 00:10:22.609 --rc genhtml_legend=1 00:10:22.609 --rc geninfo_all_blocks=1 00:10:22.609 --rc geninfo_unexecuted_blocks=1 00:10:22.609 00:10:22.609 ' 00:10:22.609 21:52:07 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:22.609 21:52:07 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:22.609 21:52:07 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:22.868 21:52:07 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:22.868 21:52:07 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=78980 00:10:22.868 21:52:07 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:22.868 21:52:07 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 78980 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 78980 ']' 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.868 21:52:07 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:22.868 21:52:07 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.868 [2024-09-30 21:52:07.524489] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
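get_first_nvme_bdf, traced above, renders the generated SPDK config and takes the first PCI address out of it. A standalone sketch using the same jq filter, with the repository path as in this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    echo "${bdfs[0]}"    # 0000:00:10.0 in this run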
00:10:22.868 [2024-09-30 21:52:07.524615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78980 ] 00:10:22.868 [2024-09-30 21:52:07.652864] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:22.868 [2024-09-30 21:52:07.669885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:23.125 [2024-09-30 21:52:07.704524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.125 [2024-09-30 21:52:07.704539] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.690 21:52:08 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.690 21:52:08 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:23.690 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:23.949 Nvme0n1 00:10:23.949 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:23.949 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:24.207 request: 00:10:24.207 { 00:10:24.207 "bdev_name": "Nvme0n1", 00:10:24.207 "filename": "non_existing_file", 00:10:24.207 "method": "bdev_nvme_apply_firmware", 00:10:24.207 "req_id": 1 00:10:24.207 } 00:10:24.207 Got JSON-RPC error response 00:10:24.207 response: 00:10:24.207 { 00:10:24.207 "code": -32603, 00:10:24.207 "message": "open file failed." 00:10:24.207 } 00:10:24.207 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:24.207 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:24.207 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:24.207 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:24.207 21:52:08 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 78980 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 78980 ']' 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 78980 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78980 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.207 killing process with pid 78980 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78980' 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@969 -- # kill 78980 00:10:24.207 21:52:08 nvme_rpc -- common/autotest_common.sh@974 -- # wait 78980 00:10:24.464 00:10:24.464 real 0m1.979s 00:10:24.464 user 0m3.778s 00:10:24.464 sys 0m0.470s 00:10:24.464 21:52:09 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.464 ************************************ 00:10:24.464 END TEST nvme_rpc 00:10:24.464 ************************************ 00:10:24.464 21:52:09 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.464 21:52:09 
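What ran above is a negative test: bdev_nvme_apply_firmware is handed a file that does not exist and must fail with JSON-RPC error -32603 ("open file failed."). A sketch of the same expected-failure call, reusing this run's rpc.py path and bdev name:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if "$rpc_py" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo "firmware apply unexpectedly succeeded" >&2
        exit 1
    fi
    echo "got the expected open-file failure"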
-- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:24.464 21:52:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:24.464 21:52:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:24.464 21:52:09 -- common/autotest_common.sh@10 -- # set +x 00:10:24.464 ************************************ 00:10:24.464 START TEST nvme_rpc_timeouts 00:10:24.464 ************************************ 00:10:24.464 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:24.722 * Looking for test storage... 00:10:24.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.722 21:52:09 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:24.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.722 --rc genhtml_branch_coverage=1 00:10:24.722 --rc genhtml_function_coverage=1 00:10:24.722 --rc genhtml_legend=1 00:10:24.722 --rc geninfo_all_blocks=1 00:10:24.722 --rc geninfo_unexecuted_blocks=1 00:10:24.722 00:10:24.722 ' 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:24.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.722 --rc genhtml_branch_coverage=1 00:10:24.722 --rc genhtml_function_coverage=1 00:10:24.722 --rc genhtml_legend=1 00:10:24.722 --rc geninfo_all_blocks=1 00:10:24.722 --rc geninfo_unexecuted_blocks=1 00:10:24.722 00:10:24.722 ' 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:24.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.722 --rc genhtml_branch_coverage=1 00:10:24.722 --rc genhtml_function_coverage=1 00:10:24.722 --rc genhtml_legend=1 00:10:24.722 --rc geninfo_all_blocks=1 00:10:24.722 --rc geninfo_unexecuted_blocks=1 00:10:24.722 00:10:24.722 ' 00:10:24.722 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:24.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.722 --rc genhtml_branch_coverage=1 00:10:24.722 --rc genhtml_function_coverage=1 00:10:24.722 --rc genhtml_legend=1 00:10:24.722 --rc geninfo_all_blocks=1 00:10:24.722 --rc geninfo_unexecuted_blocks=1 00:10:24.722 00:10:24.722 ' 00:10:24.722 21:52:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.723 21:52:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_79029 00:10:24.723 21:52:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_79029 00:10:24.723 21:52:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=79066 00:10:24.723 21:52:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
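The cmp_versions trace repeated here (and in the nvme_rpc run above) gates the lcov option set on "lcov --version" being older than 2: version strings are split on '.', '-' and ':' and compared numerically field by field. A compact sketch of that comparison; the helper name is assumed:

    ver_lt() {    # true if $1 < $2, numeric field-by-field
        local -a a b; local i
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal
    }
    ver_lt 1.15 2 && echo "lcov is older than 2"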
00:10:24.723 21:52:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 79066 00:10:24.723 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 79066 ']' 00:10:24.723 21:52:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:24.723 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.723 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:24.723 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.723 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:24.723 21:52:09 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:24.723 [2024-09-30 21:52:09.481424] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:10:24.723 [2024-09-30 21:52:09.481526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79066 ] 00:10:24.980 [2024-09-30 21:52:09.605027] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:24.980 [2024-09-30 21:52:09.625615] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:24.980 [2024-09-30 21:52:09.658508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.980 [2024-09-30 21:52:09.658666] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.544 21:52:10 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:25.544 Checking default timeout settings: 00:10:25.544 21:52:10 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:25.544 21:52:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:25.544 21:52:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:26.114 Making settings changes with rpc: 00:10:26.114 21:52:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:26.114 21:52:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:26.114 Check default vs. modified settings: 00:10:26.114 21:52:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:26.114 21:52:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_79029 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_79029 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:26.419 Setting action_on_timeout is changed as expected. 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_79029 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_79029 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:26.419 Setting timeout_us is changed as expected. 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
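The @38-@47 loop traced above is the heart of the test: for each of the three settings it pulls the value out of the default and the modified save_config dumps and asserts they differ, with the none-to-abort and 0-to-12000000 transitions visible in setting_before/setting_modified. A hedged reconstruction (the grep/awk/sed pipeline is copied from the trace; the failure branch is paraphrased):

    settings_to_check='action_on_timeout timeout_us timeout_admin_us'
    for setting in $settings_to_check; do
        # Extract the "<setting>": <value> pair from each JSON dump, strip punctuation.
        setting_before=$(grep "$setting" "$tmpfile_default_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        setting_modified=$(grep "$setting" "$tmpfile_modified_settings" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$setting_before" == "$setting_modified" ]; then
            echo "Setting $setting was not changed ($setting_before)" >&2
            exit 1
        fi
        echo "Setting $setting is changed as expected."
    done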
00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_79029 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_79029 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:26.419 Setting timeout_admin_us is changed as expected. 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_79029 /tmp/settings_modified_79029 00:10:26.419 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 79066 00:10:26.419 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 79066 ']' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 79066 00:10:26.419 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:26.419 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:26.419 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79066 00:10:26.677 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:26.677 killing process with pid 79066 00:10:26.677 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:26.677 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79066' 00:10:26.677 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 79066 00:10:26.677 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 79066 00:10:26.935 RPC TIMEOUT SETTING TEST PASSED. 00:10:26.935 21:52:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
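Stripped of the tracing, the PASSED run above boils down to three rpc.py calls against the freshly started target: snapshot, mutate, snapshot. The option values are exactly the ones shown in the trace; redirecting each save_config into the tmpfiles is a sketch of how the snapshots plausibly land on disk:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py save_config > "$tmpfile_default_settings"     # @30: defaults as the target booted
    $rpc_py bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort                         # @34: 12 s I/O timeout, 24 s admin timeout, abort on expiry
    $rpc_py save_config > "$tmpfile_modified_settings"    # @37: snapshot after the change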
00:10:26.935 00:10:26.935 real 0m2.231s 00:10:26.935 user 0m4.509s 00:10:26.935 sys 0m0.441s 00:10:26.935 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.935 ************************************ 00:10:26.935 END TEST nvme_rpc_timeouts 00:10:26.935 ************************************ 00:10:26.935 21:52:11 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:26.935 21:52:11 -- spdk/autotest.sh@239 -- # uname -s 00:10:26.935 21:52:11 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:26.935 21:52:11 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:26.935 21:52:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:26.935 21:52:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.935 21:52:11 -- common/autotest_common.sh@10 -- # set +x 00:10:26.935 ************************************ 00:10:26.935 START TEST sw_hotplug 00:10:26.935 ************************************ 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:26.935 * Looking for test storage... 00:10:26.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.935 21:52:11 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:26.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.935 --rc genhtml_branch_coverage=1 00:10:26.935 --rc genhtml_function_coverage=1 00:10:26.935 --rc genhtml_legend=1 00:10:26.935 --rc geninfo_all_blocks=1 00:10:26.935 --rc geninfo_unexecuted_blocks=1 00:10:26.935 00:10:26.935 ' 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:26.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.935 --rc genhtml_branch_coverage=1 00:10:26.935 --rc genhtml_function_coverage=1 00:10:26.935 --rc genhtml_legend=1 00:10:26.935 --rc geninfo_all_blocks=1 00:10:26.935 --rc geninfo_unexecuted_blocks=1 00:10:26.935 00:10:26.935 ' 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:26.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.935 --rc genhtml_branch_coverage=1 00:10:26.935 --rc genhtml_function_coverage=1 00:10:26.935 --rc genhtml_legend=1 00:10:26.935 --rc geninfo_all_blocks=1 00:10:26.935 --rc geninfo_unexecuted_blocks=1 00:10:26.935 00:10:26.935 ' 00:10:26.935 21:52:11 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:26.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.935 --rc genhtml_branch_coverage=1 00:10:26.935 --rc genhtml_function_coverage=1 00:10:26.935 --rc genhtml_legend=1 00:10:26.935 --rc geninfo_all_blocks=1 00:10:26.935 --rc geninfo_unexecuted_blocks=1 00:10:26.935 00:10:26.935 ' 00:10:26.935 21:52:11 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:27.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.451 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.451 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.451 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.451 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:27.451 21:52:12 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:27.451 21:52:12 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:27.451 21:52:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
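nvme_in_userspace, expanded in the trace that follows, discovers controllers purely by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), hence the "0108" and -p02 matching below. The lspci pipeline is verbatim from the trace; the per-BDF filtering around it is summarized in comments rather than reproduced exactly:

    # List candidate NVMe controller BDFs by class code (pipeline as traced):
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # Each BDF is then vetted with pci_can_use (which honors PCI_ALLOWED/PCI_BLOCKED)
    # and, on Linux, checked for an entry under /sys/bus/pci/drivers/nvme before
    # being appended to the nvmes array; sw_hotplug.sh keeps only the first
    # nvme_count=2 entries.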
00:10:27.451 21:52:12 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:27.451 21:52:12 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.452 21:52:12 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:27.452 21:52:12 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:27.452 21:52:12 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:27.452 21:52:12 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:27.452 21:52:12 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:27.711 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:27.968 Waiting for block devices as requested 00:10:27.968 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.968 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:27.968 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.227 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:33.491 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:33.491 21:52:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:33.491 21:52:17 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:33.491 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:33.491 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:33.491 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:33.750 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:34.008 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.008 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:34.008 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:34.008 21:52:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=79905 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:34.266 21:52:18 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:34.266 21:52:18 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:34.266 21:52:18 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:34.266 21:52:18 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:34.266 21:52:18 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:34.266 21:52:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:34.266 Initializing NVMe Controllers 00:10:34.266 Attaching to 0000:00:10.0 00:10:34.266 Attaching to 0000:00:11.0 00:10:34.266 Attached to 0000:00:10.0 00:10:34.266 Attached to 0000:00:11.0 00:10:34.266 Initialization complete. Starting I/O... 
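Each of the three hotplug_events iterations traced below yanks both allowed controllers out from under the running hotplug app via sysfs, waits, then re-attaches them. The echo 1 at sw_hotplug.sh@40 is the surprise removal; @56 through @62 are the rescan-and-rebind sequence. A hedged sketch of the underlying sysfs idiom follows; these are the standard Linux PCI interfaces, not a verbatim copy of the script, and the exact node used for rebinding is an assumption:

    bdf=0000:00:10.0                                      # illustrative; the test loops over both BDFs
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"           # @40: surprise-remove the device
    # ... the app sees the controller enter the failed state and detaches it ...
    echo 1 > /sys/bus/pci/rescan                          # @56: re-enumerate the bus
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # @59: pin the userspace driver
    echo "$bdf" > /sys/bus/pci/drivers_probe              # assumed rebind node; @60/@61 echo the BDF
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override" # @62: clear the override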
00:10:34.266 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:34.267 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:34.267 00:10:35.637 QEMU NVMe Ctrl (12340 ): 3163 I/Os completed (+3163) 00:10:35.637 QEMU NVMe Ctrl (12341 ): 3342 I/Os completed (+3342) 00:10:35.637 00:10:36.568 QEMU NVMe Ctrl (12340 ): 6819 I/Os completed (+3656) 00:10:36.568 QEMU NVMe Ctrl (12341 ): 7249 I/Os completed (+3907) 00:10:36.568 00:10:37.499 QEMU NVMe Ctrl (12340 ): 10558 I/Os completed (+3739) 00:10:37.499 QEMU NVMe Ctrl (12341 ): 10948 I/Os completed (+3699) 00:10:37.499 00:10:38.429 QEMU NVMe Ctrl (12340 ): 14308 I/Os completed (+3750) 00:10:38.429 QEMU NVMe Ctrl (12341 ): 14695 I/Os completed (+3747) 00:10:38.429 00:10:39.358 QEMU NVMe Ctrl (12340 ): 19039 I/Os completed (+4731) 00:10:39.358 QEMU NVMe Ctrl (12341 ): 19127 I/Os completed (+4432) 00:10:39.358 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.290 [2024-09-30 21:52:24.862835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:40.290 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:40.290 [2024-09-30 21:52:24.864081] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.864147] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.864173] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.864206] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:40.290 [2024-09-30 21:52:24.865563] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.865600] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.865622] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.865645] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.290 [2024-09-30 21:52:24.884877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:40.290 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:40.290 [2024-09-30 21:52:24.885859] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.885908] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.885930] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.885956] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:40.290 [2024-09-30 21:52:24.887252] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.887298] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.887320] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 [2024-09-30 21:52:24.887343] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.290 21:52:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:40.290 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:40.290 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.290 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.290 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.290 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:40.290 Attaching to 0000:00:10.0 00:10:40.290 Attached to 0000:00:10.0 00:10:40.290 QEMU NVMe Ctrl (12340 ): 90 I/Os completed (+90) 00:10:40.290 00:10:40.290 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:40.548 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.548 21:52:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:40.548 Attaching to 0000:00:11.0 00:10:40.548 Attached to 0000:00:11.0 00:10:41.559 QEMU NVMe Ctrl (12340 ): 4026 I/Os completed (+3936) 00:10:41.559 QEMU NVMe Ctrl (12341 ): 3801 I/Os completed (+3801) 00:10:41.559 00:10:42.490 QEMU NVMe Ctrl (12340 ): 7722 I/Os completed (+3696) 00:10:42.490 QEMU NVMe Ctrl (12341 ): 7535 I/Os completed (+3734) 00:10:42.490 00:10:43.422 QEMU NVMe Ctrl (12340 ): 11566 I/Os completed (+3844) 00:10:43.422 QEMU NVMe Ctrl (12341 ): 11324 I/Os completed (+3789) 00:10:43.422 00:10:44.357 QEMU NVMe Ctrl (12340 ): 15290 I/Os completed (+3724) 00:10:44.357 QEMU NVMe Ctrl (12341 ): 15017 I/Os completed (+3693) 00:10:44.357 00:10:45.292 QEMU NVMe Ctrl (12340 ): 19641 I/Os completed (+4351) 00:10:45.292 QEMU NVMe Ctrl (12341 ): 19285 I/Os completed (+4268) 00:10:45.292 00:10:46.690 QEMU NVMe Ctrl (12340 ): 23640 I/Os completed (+3999) 00:10:46.690 QEMU NVMe Ctrl (12341 ): 23626 I/Os completed (+4341) 00:10:46.690 00:10:47.254 QEMU NVMe Ctrl (12340 ): 28553 I/Os completed (+4913) 00:10:47.254 QEMU NVMe Ctrl (12341 ): 28636 I/Os completed (+5010) 00:10:47.254 00:10:48.628 QEMU NVMe Ctrl (12340 ): 32492 I/Os completed (+3939) 00:10:48.628 
QEMU NVMe Ctrl (12341 ): 32809 I/Os completed (+4173) 00:10:48.628 00:10:49.561 QEMU NVMe Ctrl (12340 ): 36380 I/Os completed (+3888) 00:10:49.561 QEMU NVMe Ctrl (12341 ): 36845 I/Os completed (+4036) 00:10:49.561 00:10:50.494 QEMU NVMe Ctrl (12340 ): 40249 I/Os completed (+3869) 00:10:50.494 QEMU NVMe Ctrl (12341 ): 40841 I/Os completed (+3996) 00:10:50.494 00:10:51.427 QEMU NVMe Ctrl (12340 ): 44207 I/Os completed (+3958) 00:10:51.427 QEMU NVMe Ctrl (12341 ): 45056 I/Os completed (+4215) 00:10:51.427 00:10:52.362 QEMU NVMe Ctrl (12340 ): 48009 I/Os completed (+3802) 00:10:52.362 QEMU NVMe Ctrl (12341 ): 48880 I/Os completed (+3824) 00:10:52.362 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.362 [2024-09-30 21:52:37.116866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:52.362 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:52.362 [2024-09-30 21:52:37.117863] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.117901] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.117917] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.117932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:52.362 [2024-09-30 21:52:37.119198] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.119229] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.119243] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.119254] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:52.362 [2024-09-30 21:52:37.137416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:52.362 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:52.362 [2024-09-30 21:52:37.138315] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.138352] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.138366] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.138381] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:52.362 [2024-09-30 21:52:37.139366] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.139398] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.139410] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 [2024-09-30 21:52:37.139423] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:52.362 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:52.621 Attaching to 0000:00:10.0 00:10:52.621 Attached to 0000:00:10.0 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.621 21:52:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:52.621 Attaching to 0000:00:11.0 00:10:52.621 Attached to 0000:00:11.0 00:10:53.556 QEMU NVMe Ctrl (12340 ): 2982 I/Os completed (+2982) 00:10:53.556 QEMU NVMe Ctrl (12341 ): 2747 I/Os completed (+2747) 00:10:53.556 00:10:54.489 QEMU NVMe Ctrl (12340 ): 6726 I/Os completed (+3744) 00:10:54.489 QEMU NVMe Ctrl (12341 ): 6468 I/Os completed (+3721) 00:10:54.489 00:10:55.425 QEMU NVMe Ctrl (12340 ): 11735 I/Os completed (+5009) 00:10:55.425 QEMU NVMe Ctrl (12341 ): 12025 I/Os completed (+5557) 00:10:55.425 00:10:56.360 QEMU NVMe Ctrl (12340 ): 15853 I/Os completed (+4118) 00:10:56.360 QEMU NVMe Ctrl (12341 ): 16361 I/Os completed (+4336) 00:10:56.360 00:10:57.293 QEMU NVMe Ctrl (12340 ): 19574 I/Os completed (+3721) 00:10:57.293 QEMU NVMe Ctrl (12341 ): 20426 I/Os completed (+4065) 00:10:57.293 00:10:58.666 QEMU NVMe Ctrl (12340 ): 23178 I/Os completed (+3604) 00:10:58.666 QEMU NVMe Ctrl (12341 ): 24012 I/Os completed (+3586) 00:10:58.666 00:10:59.600 QEMU NVMe Ctrl (12340 ): 27044 I/Os completed (+3866) 00:10:59.600 QEMU NVMe Ctrl (12341 ): 27907 I/Os completed (+3895) 00:10:59.600 00:11:00.534 QEMU NVMe Ctrl (12340 ): 31202 I/Os completed (+4158) 00:11:00.534 QEMU NVMe Ctrl (12341 ): 32065 I/Os completed (+4158) 00:11:00.534 
00:11:01.467 QEMU NVMe Ctrl (12340 ): 35349 I/Os completed (+4147) 00:11:01.467 QEMU NVMe Ctrl (12341 ): 36247 I/Os completed (+4182) 00:11:01.467 00:11:02.399 QEMU NVMe Ctrl (12340 ): 39140 I/Os completed (+3791) 00:11:02.399 QEMU NVMe Ctrl (12341 ): 40007 I/Os completed (+3760) 00:11:02.399 00:11:03.370 QEMU NVMe Ctrl (12340 ): 42985 I/Os completed (+3845) 00:11:03.370 QEMU NVMe Ctrl (12341 ): 43825 I/Os completed (+3818) 00:11:03.370 00:11:04.335 QEMU NVMe Ctrl (12340 ): 46571 I/Os completed (+3586) 00:11:04.335 QEMU NVMe Ctrl (12341 ): 47436 I/Os completed (+3611) 00:11:04.335 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.592 [2024-09-30 21:52:49.368352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:04.592 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:04.592 [2024-09-30 21:52:49.369387] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.369422] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.369440] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.369456] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:04.592 [2024-09-30 21:52:49.370725] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.370750] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.370765] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.370778] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:04.592 [2024-09-30 21:52:49.391119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:04.592 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:04.592 [2024-09-30 21:52:49.392055] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.392095] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.392110] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.392125] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:04.592 [2024-09-30 21:52:49.393173] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.393216] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.393229] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 [2024-09-30 21:52:49.393243] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:04.592 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.592 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:04.592 EAL: Scan for (pci) bus failed. 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:04.850 Attaching to 0000:00:10.0 00:11:04.850 Attached to 0000:00:10.0 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.850 21:52:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.850 Attaching to 0000:00:11.0 00:11:04.850 Attached to 0000:00:11.0 00:11:04.850 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:04.850 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:04.850 [2024-09-30 21:52:49.648903] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:17.066 21:53:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:17.066 21:53:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:17.066 21:53:01 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.79 00:11:17.066 21:53:01 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.79 00:11:17.066 21:53:01 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:17.066 21:53:01 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.79 00:11:17.066 21:53:01 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.79 2 00:11:17.066 remove_attach_helper took 42.79s to complete (handling 2 nvme drive(s)) 21:53:01 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 79905 00:11:23.631 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (79905) - No such process 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 79905 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=80455 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 80455 00:11:23.631 21:53:07 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 80455 ']' 00:11:23.631 21:53:07 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.631 21:53:07 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.631 21:53:07 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.631 21:53:07 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.631 21:53:07 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.631 21:53:07 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:23.631 [2024-09-30 21:53:07.730516] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:11:23.631 [2024-09-30 21:53:07.730649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80455 ] 00:11:23.631 [2024-09-30 21:53:07.858950] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
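From here on the hotplug handling moves into the target itself: tgt_run_hotplug starts spdk_tgt, enables the bdev_nvme hotplug monitor over RPC, and watches bdevs come and go instead of driving the example app. The bdev_bdfs helper expanded repeatedly below reduces to a single jq query; the wait loop here is an illustrative paraphrase of the "Still waiting" polling visible in the trace:

    rpc_cmd bdev_nvme_set_hotplug -e       # @115: enable the hotplug monitor (rpc_cmd wraps scripts/rpc.py)
    bdev_bdfs() {                          # BDFs currently backing nvme bdevs in the target
        rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
    }
    # After a sysfs remove, poll until the BDF drops out of the target's view:
    while [[ " $(bdev_bdfs) " == *" $bdf "* ]]; do
        printf 'Still waiting for %s to be gone\n' "$bdf"
        sleep 0.5
    done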
00:11:23.631 [2024-09-30 21:53:07.878139] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.631 [2024-09-30 21:53:07.913968] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:23.889 21:53:08 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:23.889 21:53:08 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.442 21:53:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.442 21:53:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.442 21:53:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:30.442 21:53:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:30.442 [2024-09-30 21:53:14.668135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:30.442 [2024-09-30 21:53:14.669333] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:14.669367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:14.669382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 [2024-09-30 21:53:14.669397] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:14.669405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:14.669416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 [2024-09-30 21:53:14.669423] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:14.669431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:14.669438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 [2024-09-30 21:53:14.669445] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:14.669452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:14.669460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 [2024-09-30 21:53:15.068146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:30.442 [2024-09-30 21:53:15.069425] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:15.069458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:15.069470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 [2024-09-30 21:53:15.069482] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:15.069491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:15.069498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 [2024-09-30 21:53:15.069507] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:15.069513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:15.069523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 [2024-09-30 21:53:15.069529] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.442 [2024-09-30 21:53:15.069537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.442 [2024-09-30 21:53:15.069544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.442 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:30.442 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:30.442 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:30.442 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:30.442 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:30.442 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:30.443 21:53:15 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.443 21:53:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:30.443 21:53:15 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.443 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:30.443 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.443 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.443 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.443 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:30.702 21:53:15 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.901 21:53:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.901 21:53:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.901 21:53:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:42.901 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:42.901 [2024-09-30 21:53:27.469094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:42.901 [2024-09-30 21:53:27.470373] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.901 [2024-09-30 21:53:27.470408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.901 [2024-09-30 21:53:27.470420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.902 [2024-09-30 21:53:27.470435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.902 [2024-09-30 21:53:27.470442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.902 [2024-09-30 21:53:27.470451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.902 [2024-09-30 21:53:27.470458] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.902 [2024-09-30 21:53:27.470466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.902 [2024-09-30 21:53:27.470473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.902 [2024-09-30 21:53:27.470480] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:42.902 [2024-09-30 21:53:27.470486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:42.902 [2024-09-30 21:53:27.470497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:42.902 21:53:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:42.902 21:53:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:42.902 21:53:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:42.902 21:53:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:43.159 [2024-09-30 21:53:27.869108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:43.159 [2024-09-30 21:53:27.870293] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.159 [2024-09-30 21:53:27.870324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.160 [2024-09-30 21:53:27.870336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.160 [2024-09-30 21:53:27.870349] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.160 [2024-09-30 21:53:27.870358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.160 [2024-09-30 21:53:27.870365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.160 [2024-09-30 21:53:27.870373] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.160 [2024-09-30 21:53:27.870379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.160 [2024-09-30 21:53:27.870388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.160 [2024-09-30 21:53:27.870394] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.160 [2024-09-30 21:53:27.870402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:43.160 [2024-09-30 21:53:27.870408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:43.417 21:53:28 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.417 21:53:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:43.417 21:53:28 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.417 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:43.675 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.675 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.675 21:53:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.898 21:53:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.898 21:53:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.898 21:53:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:55.898 21:53:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.898 21:53:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:55.898 21:53:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:55.898 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:55.898 [2024-09-30 21:53:40.369351] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:55.898 [2024-09-30 21:53:40.370689] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.898 [2024-09-30 21:53:40.370789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.898 [2024-09-30 21:53:40.370802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.898 [2024-09-30 21:53:40.370819] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.898 [2024-09-30 21:53:40.370827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.898 [2024-09-30 21:53:40.370836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.898 [2024-09-30 21:53:40.370843] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.898 [2024-09-30 21:53:40.370851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.898 [2024-09-30 21:53:40.370857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:55.898 [2024-09-30 21:53:40.370865] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:55.898 [2024-09-30 21:53:40.370872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:55.898 [2024-09-30 21:53:40.370880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.160 [2024-09-30 21:53:40.769351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
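The ABORTED - BY REQUEST completions above are the expected signature of a surprise removal: sw_hotplug.sh@40 echoes 1 once per device, the controller drops into a failed state, and SPDK aborts the outstanding ASYNC EVENT REQUEST trackers on the admin queue. The trace records only the echoed value, so the redirection target in this sketch is an inference from the standard Linux PCI sysfs layout, not the script verbatim:

    # Hedged reconstruction of sw_hotplug.sh@39-40: surprise-remove each NVMe
    # function while its admin commands are still outstanding.
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done

The same nvme_ctrlr_fail / abort-tracker sequence repeats once per device, as the continuation for 0000:00:11.0 below shows.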
00:11:56.160 [2024-09-30 21:53:40.770715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.160 [2024-09-30 21:53:40.770754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.160 [2024-09-30 21:53:40.770773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.160 [2024-09-30 21:53:40.770787] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.160 [2024-09-30 21:53:40.770799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.160 [2024-09-30 21:53:40.770807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.160 [2024-09-30 21:53:40.770816] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.160 [2024-09-30 21:53:40.770823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.160 [2024-09-30 21:53:40.770831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.160 [2024-09-30 21:53:40.770838] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:56.160 [2024-09-30 21:53:40.770846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:56.160 [2024-09-30 21:53:40.770852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:56.161 21:53:40 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.161 21:53:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:56.161 21:53:40 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.161 21:53:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:56.419 21:53:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.60 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.60 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.60 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.60 2 00:12:08.612 remove_attach_helper took 44.60s to complete (handling 2 nvme drive(s)) 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:08.612 21:53:53 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:08.612 21:53:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:08.612 21:53:53 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.179 21:53:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.179 21:53:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.179 21:53:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:15.179 [2024-09-30 21:53:59.301343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:12:15.179 [2024-09-30 21:53:59.302605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.302721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.302816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 [2024-09-30 21:53:59.302852] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.302971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.303326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 [2024-09-30 21:53:59.303450] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.303485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.303551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 [2024-09-30 21:53:59.303613] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.303630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.303683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:15.179 21:53:59 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.179 21:53:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.179 21:53:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.179 [2024-09-30 21:53:59.801329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:12:15.179 [2024-09-30 21:53:59.802570] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.802605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.802619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 [2024-09-30 21:53:59.802633] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.802643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.802650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 [2024-09-30 21:53:59.802659] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.802666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.802676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 [2024-09-30 21:53:59.802682] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:15.179 [2024-09-30 21:53:59.802693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:15.179 [2024-09-30 21:53:59.802699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:15.179 21:53:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:15.179 21:53:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.770 21:54:00 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.770 21:54:00 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.770 21:54:00 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:15.770 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:16.033 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:16.033 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:16.033 21:54:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.233 21:54:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.233 21:54:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.233 21:54:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.233 21:54:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.233 21:54:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.233 21:54:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:28.233 [2024-09-30 21:54:12.701642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
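Every poll cycle re-derives the set of PCI addresses still backing NVMe bdevs, and the whole pipeline is visible in the trace at sw_hotplug.sh@12-13: bdev_get_bdevs over the RPC socket, a jq projection down to driver_specific.nvme[].pci_address, and sort -u. The /dev/fd/63 argument indicates the RPC output reaches jq through a process substitution. A sketch consistent with that trace:

    # Reconstruction of the bdev_bdfs helper as traced (sw_hotplug.sh@12-13):
    # list all bdevs via RPC, keep only NVMe PCI addresses, de-duplicate.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
    }

Callers capture the result as an array, bdfs=($(bdev_bdfs)), then either compare it against the expected "0000:00:10.0 0000:00:11.0" list after a rescan (sw_hotplug.sh@71) or count it to decide whether a removal has finished (sw_hotplug.sh@50).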
00:12:28.233 21:54:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:28.233 [2024-09-30 21:54:12.702721] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.233 [2024-09-30 21:54:12.702752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.233 [2024-09-30 21:54:12.702763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.233 [2024-09-30 21:54:12.702777] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.233 [2024-09-30 21:54:12.702784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.233 [2024-09-30 21:54:12.702796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.233 [2024-09-30 21:54:12.702802] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.233 [2024-09-30 21:54:12.702810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.233 [2024-09-30 21:54:12.702816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.233 [2024-09-30 21:54:12.702823] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.233 [2024-09-30 21:54:12.702829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.233 [2024-09-30 21:54:12.702837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.491 [2024-09-30 21:54:13.101644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
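The interleaved (( N > 0 )) checks, sleep 0.5 calls, and "Still waiting for %s to be gone" lines all belong to one poll loop: sw_hotplug.sh@50-51 spins until bdev_bdfs returns nothing, naming the stragglers on each pass. Only individual iterations appear in the trace, so the loop framing below is inferred:

    # Inferred shape of the wait loop at sw_hotplug.sh@50-51: poll until every
    # removed BDF has disappeared from the bdev list.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done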
00:12:28.492 [2024-09-30 21:54:13.102423] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.492 [2024-09-30 21:54:13.102452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.492 [2024-09-30 21:54:13.102464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.492 [2024-09-30 21:54:13.102476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.492 [2024-09-30 21:54:13.102484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.492 [2024-09-30 21:54:13.102491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.492 [2024-09-30 21:54:13.102499] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.492 [2024-09-30 21:54:13.102506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.492 [2024-09-30 21:54:13.102514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.492 [2024-09-30 21:54:13.102520] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:28.492 [2024-09-30 21:54:13.102528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:28.492 [2024-09-30 21:54:13.102534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:28.492 21:54:13 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.492 21:54:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:28.492 21:54:13 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:28.492 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:28.807 21:54:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.021 21:54:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.021 21:54:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.021 21:54:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.021 21:54:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.021 21:54:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.021 21:54:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:41.021 21:54:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:41.021 [2024-09-30 21:54:25.601855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
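Between hotplug rounds the helper puts the devices back: sw_hotplug.sh@56 echoes 1 (a bus rescan in the usual sysfs scheme), then @58-62 visit each BDF echoing the driver name, the address twice, and an empty string, which is the familiar driver_override sequence for pinning a function to uio_pci_generic. The trace shows only the echoed values, so every sysfs path below is an assumption layered on top of them:

    # Hypothetical expansion of sw_hotplug.sh@56-62: rescan, then bind each
    # recovered NVMe function to uio_pci_generic and clear the override.
    echo 1 > /sys/bus/pci/rescan                                            # @56
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers_probe                            # @60 (target assumed)
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @61 (target assumed)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"               # @62
    done

Once both functions reattach, the sleep 12 at @66 gives the hotplug monitor time to re-enumerate them before @71 verifies the BDF list.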
00:12:41.021 [2024-09-30 21:54:25.602862] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.021 [2024-09-30 21:54:25.602898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.021 [2024-09-30 21:54:25.602909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.021 [2024-09-30 21:54:25.602927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.021 [2024-09-30 21:54:25.602936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.021 [2024-09-30 21:54:25.602944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.021 [2024-09-30 21:54:25.602952] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.021 [2024-09-30 21:54:25.602960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.021 [2024-09-30 21:54:25.602966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.021 [2024-09-30 21:54:25.602974] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.021 [2024-09-30 21:54:25.602981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.021 [2024-09-30 21:54:25.602989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.279 [2024-09-30 21:54:26.001855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
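Each three-event scenario runs inside the timing harness traced at autotest_common.sh@707-720: timing_cmd sets TIMEFORMAT=%2R, runs remove_attach_helper 3 6 true under bash's time builtin, and hands the elapsed real seconds back as helper_time. That is where the earlier "remove_attach_helper took 44.60s to complete (handling 2 nvme drive(s))" line came from, and where the 45.17s counterpart that closes this block comes from. A simplified sketch (the real helper also juggles file descriptors, visible as the bare exec in the trace, so the timed command's own output stays out of the measurement):

    # Sketch of the timing_cmd pattern (autotest_common.sh@707-720): emit only
    # the wall-clock seconds of the wrapped command, preserving its exit status.
    timing_cmd() {
        local cmd_es=0 time TIMEFORMAT=%2R
        time=$( { time "$@" > /dev/null; } 2>&1 ) || cmd_es=$?
        echo "$time"
        return "$cmd_es"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"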
00:12:41.279 [2024-09-30 21:54:26.002633] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.279 [2024-09-30 21:54:26.002664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.279 [2024-09-30 21:54:26.002676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.279 [2024-09-30 21:54:26.002688] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.279 [2024-09-30 21:54:26.002697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.279 [2024-09-30 21:54:26.002704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.279 [2024-09-30 21:54:26.002714] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.279 [2024-09-30 21:54:26.002720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.279 [2024-09-30 21:54:26.002728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.279 [2024-09-30 21:54:26.002734] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:41.279 [2024-09-30 21:54:26.002742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:41.279 [2024-09-30 21:54:26.002748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:41.279 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:41.279 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:41.279 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:41.279 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:41.279 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:41.279 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:41.279 21:54:26 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.279 21:54:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:41.542 21:54:26 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:41.542 21:54:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.17 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.17 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:12:53.792 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:53.792 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 80455 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 80455 ']' 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 80455 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80455 00:12:53.792 killing process with pid 80455 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80455' 00:12:53.792 21:54:38 sw_hotplug -- common/autotest_common.sh@969 -- # kill 80455 00:12:53.793 21:54:38 sw_hotplug -- common/autotest_common.sh@974 -- # wait 80455 00:12:54.051 21:54:38 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:54.308 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:54.565 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:54.565 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:54.823 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:54.823 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:54.823 00:12:54.823 real 2m27.934s 00:12:54.823 user 1m48.770s 00:12:54.823 sys 0m17.869s 00:12:54.823 21:54:39 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:54.823 21:54:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.823 ************************************ 00:12:54.823 END TEST sw_hotplug 00:12:54.823 ************************************ 00:12:54.823 21:54:39 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:54.823 21:54:39 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:54.823 21:54:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:54.823 21:54:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:54.823 21:54:39 -- common/autotest_common.sh@10 -- # set +x 00:12:54.823 ************************************ 00:12:54.823 START TEST nvme_xnvme 00:12:54.823 ************************************ 00:12:54.823 21:54:39 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:54.823 * Looking for test storage... 00:12:54.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:54.823 21:54:39 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:54.823 21:54:39 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:54.823 21:54:39 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:55.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.170 --rc genhtml_branch_coverage=1 00:12:55.170 --rc genhtml_function_coverage=1 00:12:55.170 --rc genhtml_legend=1 00:12:55.170 --rc geninfo_all_blocks=1 00:12:55.170 --rc geninfo_unexecuted_blocks=1 00:12:55.170 00:12:55.170 ' 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:55.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.170 --rc genhtml_branch_coverage=1 00:12:55.170 --rc genhtml_function_coverage=1 00:12:55.170 --rc genhtml_legend=1 00:12:55.170 --rc geninfo_all_blocks=1 00:12:55.170 --rc geninfo_unexecuted_blocks=1 00:12:55.170 00:12:55.170 ' 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:55.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.170 --rc genhtml_branch_coverage=1 00:12:55.170 --rc genhtml_function_coverage=1 00:12:55.170 --rc genhtml_legend=1 00:12:55.170 --rc geninfo_all_blocks=1 00:12:55.170 --rc geninfo_unexecuted_blocks=1 00:12:55.170 00:12:55.170 ' 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:55.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.170 --rc genhtml_branch_coverage=1 00:12:55.170 --rc genhtml_function_coverage=1 00:12:55.170 --rc genhtml_legend=1 00:12:55.170 --rc geninfo_all_blocks=1 00:12:55.170 --rc geninfo_unexecuted_blocks=1 00:12:55.170 00:12:55.170 ' 00:12:55.170 21:54:39 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.170 21:54:39 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.170 21:54:39 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.170 21:54:39 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.170 21:54:39 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.170 21:54:39 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:55.170 21:54:39 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.170 21:54:39 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.170 21:54:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:55.170 ************************************ 00:12:55.170 START TEST xnvme_to_malloc_dd_copy 00:12:55.170 ************************************ 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:55.170 21:54:39 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:55.170 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:55.171 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:55.171 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:55.171 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:55.171 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:55.171 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:55.171 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:55.171 21:54:39 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:55.171 { 00:12:55.171 "subsystems": [ 00:12:55.171 { 00:12:55.171 "subsystem": "bdev", 00:12:55.171 "config": [ 00:12:55.171 { 00:12:55.171 "params": { 00:12:55.171 "block_size": 512, 00:12:55.171 "num_blocks": 2097152, 00:12:55.171 "name": "malloc0" 00:12:55.171 }, 00:12:55.171 "method": "bdev_malloc_create" 00:12:55.171 }, 00:12:55.171 { 00:12:55.171 "params": { 00:12:55.171 "io_mechanism": "libaio", 00:12:55.171 "filename": "/dev/nullb0", 00:12:55.171 "name": "null0" 00:12:55.171 }, 00:12:55.171 "method": "bdev_xnvme_create" 00:12:55.171 }, 00:12:55.171 { 00:12:55.171 "method": "bdev_wait_for_examine" 00:12:55.171 } 00:12:55.171 ] 00:12:55.171 } 00:12:55.171 ] 00:12:55.171 } 00:12:55.171 [2024-09-30 21:54:39.742074] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:55.171 [2024-09-30 21:54:39.742183] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81822 ] 00:12:55.171 [2024-09-30 21:54:39.870030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
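The {"subsystems": ...} document above is the complete bdev configuration for one spdk_dd pass: malloc0, a RAM-backed source of 2097152 blocks of 512 bytes (1 GiB), and null0, an xnvme bdev wrapping /dev/nullb0 through libaio. xnvme.sh@42 shows how it is consumed: generated on the fly and passed over an anonymous descriptor, which is why the trace records --json /dev/fd/62:

    # Invocation shape traced at xnvme.sh@42; gen_conf emits the JSON shown
    # above, and $SPDK_ROOT stands in for /home/vagrant/spdk_repo/spdk.
    "$SPDK_ROOT/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)

The mirror pass at xnvme.sh@47 swaps --ib and --ob to copy the data back out of null0.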
00:12:55.171 [2024-09-30 21:54:39.892026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.171 [2024-09-30 21:54:39.927010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.907  Copying: 228/1024 [MB] (228 MBps) Copying: 442/1024 [MB] (213 MBps) Copying: 732/1024 [MB] (289 MBps) Copying: 1024/1024 [MB] (average 259 MBps) 00:12:59.907 00:12:59.907 21:54:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:59.907 21:54:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:59.907 21:54:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:59.907 21:54:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:59.907 { 00:12:59.908 "subsystems": [ 00:12:59.908 { 00:12:59.908 "subsystem": "bdev", 00:12:59.908 "config": [ 00:12:59.908 { 00:12:59.908 "params": { 00:12:59.908 "block_size": 512, 00:12:59.908 "num_blocks": 2097152, 00:12:59.908 "name": "malloc0" 00:12:59.908 }, 00:12:59.908 "method": "bdev_malloc_create" 00:12:59.908 }, 00:12:59.908 { 00:12:59.908 "params": { 00:12:59.908 "io_mechanism": "libaio", 00:12:59.908 "filename": "/dev/nullb0", 00:12:59.908 "name": "null0" 00:12:59.908 }, 00:12:59.908 "method": "bdev_xnvme_create" 00:12:59.908 }, 00:12:59.908 { 00:12:59.908 "method": "bdev_wait_for_examine" 00:12:59.908 } 00:12:59.908 ] 00:12:59.908 } 00:12:59.908 ] 00:12:59.908 } 00:12:59.908 [2024-09-30 21:54:44.577325] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:12:59.908 [2024-09-30 21:54:44.577442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81882 ] 00:12:59.908 [2024-09-30 21:54:44.704931] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
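The copy just completed is the libaio outbound pass (1024 MiB at an average of 259 MBps); the reverse pass and then both io_uring directions follow below. The backend switch is a one-line rewrite in the trace: xnvme.sh@38-39 walks xnvme_io=(libaio io_uring) and updates the io_mechanism key of the config array before regenerating the JSON. The loop implied by those lines, assuming gen_conf serializes method_bdev_malloc_create_0 and method_bdev_xnvme_create_0 (declared as associative arrays in the trace) into the subsystems document seen above:

    # Sketch of the per-mechanism loop (xnvme.sh@38-47): one copy each way
    # per I/O backend, reusing the same malloc0/null0 bdev pair.
    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        "$SPDK_ROOT/build/bin/spdk_dd" --ib=malloc0 --ob=null0 --json <(gen_conf)   # @42
        "$SPDK_ROOT/build/bin/spdk_dd" --ib=null0 --ob=malloc0 --json <(gen_conf)   # @47
    done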
00:13:00.164 [2024-09-30 21:54:44.725722] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.164 [2024-09-30 21:54:44.757999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.910  Copying: 307/1024 [MB] (307 MBps) Copying: 616/1024 [MB] (308 MBps) Copying: 924/1024 [MB] (307 MBps) Copying: 1024/1024 [MB] (average 308 MBps) 00:13:03.910 00:13:03.910 21:54:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:03.910 21:54:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:03.910 21:54:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:03.910 21:54:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:03.910 21:54:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:03.910 21:54:48 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:03.910 { 00:13:03.910 "subsystems": [ 00:13:03.910 { 00:13:03.910 "subsystem": "bdev", 00:13:03.910 "config": [ 00:13:03.910 { 00:13:03.910 "params": { 00:13:03.910 "block_size": 512, 00:13:03.910 "num_blocks": 2097152, 00:13:03.910 "name": "malloc0" 00:13:03.910 }, 00:13:03.910 "method": "bdev_malloc_create" 00:13:03.910 }, 00:13:03.910 { 00:13:03.910 "params": { 00:13:03.910 "io_mechanism": "io_uring", 00:13:03.910 "filename": "/dev/nullb0", 00:13:03.910 "name": "null0" 00:13:03.910 }, 00:13:03.910 "method": "bdev_xnvme_create" 00:13:03.910 }, 00:13:03.910 { 00:13:03.910 "method": "bdev_wait_for_examine" 00:13:03.910 } 00:13:03.910 ] 00:13:03.910 } 00:13:03.910 ] 00:13:03.910 } 00:13:04.167 [2024-09-30 21:54:48.749540] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:04.167 [2024-09-30 21:54:48.749648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81935 ] 00:13:04.167 [2024-09-30 21:54:48.877116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:04.167 [2024-09-30 21:54:48.899307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.167 [2024-09-30 21:54:48.934022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.610  Copying: 236/1024 [MB] (236 MBps) Copying: 473/1024 [MB] (237 MBps) Copying: 784/1024 [MB] (311 MBps) Copying: 1024/1024 [MB] (average 272 MBps) 00:13:08.610 00:13:08.610 21:54:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:08.610 21:54:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:08.610 21:54:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:08.610 21:54:53 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:08.610 { 00:13:08.610 "subsystems": [ 00:13:08.610 { 00:13:08.610 "subsystem": "bdev", 00:13:08.610 "config": [ 00:13:08.611 { 00:13:08.611 "params": { 00:13:08.611 "block_size": 512, 00:13:08.611 "num_blocks": 2097152, 00:13:08.611 "name": "malloc0" 00:13:08.611 }, 00:13:08.611 "method": "bdev_malloc_create" 00:13:08.611 }, 00:13:08.611 { 00:13:08.611 "params": { 00:13:08.611 "io_mechanism": "io_uring", 00:13:08.611 "filename": "/dev/nullb0", 00:13:08.611 "name": "null0" 00:13:08.611 }, 00:13:08.611 "method": "bdev_xnvme_create" 00:13:08.611 }, 00:13:08.611 { 00:13:08.611 "method": "bdev_wait_for_examine" 00:13:08.611 } 00:13:08.611 ] 00:13:08.611 } 00:13:08.611 ] 00:13:08.611 } 00:13:08.611 [2024-09-30 21:54:53.337450] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:08.611 [2024-09-30 21:54:53.337547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81996 ] 00:13:08.868 [2024-09-30 21:54:53.460094] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
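All four spdk_dd passes so far fall out of one loop over the configured I/O mechanisms; condensed from the xnvme.sh trace, with the names as they appear in the log:

    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)   # write direction
        spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)   # read-back direction
    done

In this run the four passes averaged 259 and 308 MBps for libaio and 272 and 271 MBps for io_uring.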
00:13:08.868 [2024-09-30 21:54:53.480803] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.868 [2024-09-30 21:54:53.514549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.039  Copying: 241/1024 [MB] (241 MBps) Copying: 482/1024 [MB] (241 MBps) Copying: 779/1024 [MB] (296 MBps) Copying: 1024/1024 [MB] (average 271 MBps) 00:13:13.039 00:13:13.297 21:54:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:13.297 21:54:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:13.297 ************************************ 00:13:13.297 END TEST xnvme_to_malloc_dd_copy 00:13:13.297 ************************************ 00:13:13.297 00:13:13.297 real 0m18.243s 00:13:13.297 user 0m15.222s 00:13:13.297 sys 0m2.498s 00:13:13.297 21:54:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:13.297 21:54:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:13.297 21:54:57 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:13.297 21:54:57 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:13.297 21:54:57 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:13.297 21:54:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.297 ************************************ 00:13:13.297 START TEST xnvme_bdevperf 00:13:13.297 ************************************ 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:13.297 21:54:57 
nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:13.297 21:54:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.297 { 00:13:13.297 "subsystems": [ 00:13:13.297 { 00:13:13.297 "subsystem": "bdev", 00:13:13.297 "config": [ 00:13:13.297 { 00:13:13.297 "params": { 00:13:13.297 "io_mechanism": "libaio", 00:13:13.297 "filename": "/dev/nullb0", 00:13:13.297 "name": "null0" 00:13:13.298 }, 00:13:13.298 "method": "bdev_xnvme_create" 00:13:13.298 }, 00:13:13.298 { 00:13:13.298 "method": "bdev_wait_for_examine" 00:13:13.298 } 00:13:13.298 ] 00:13:13.298 } 00:13:13.298 ] 00:13:13.298 } 00:13:13.298 [2024-09-30 21:54:58.013688] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:13.298 [2024-09-30 21:54:58.013785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82073 ] 00:13:13.555 [2024-09-30 21:54:58.141815] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:13.555 [2024-09-30 21:54:58.161887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.555 [2024-09-30 21:54:58.193798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.555 Running I/O for 5 seconds... 00:13:18.708 203520.00 IOPS, 795.00 MiB/s 205248.00 IOPS, 801.75 MiB/s 205994.67 IOPS, 804.67 MiB/s 206576.00 IOPS, 806.94 MiB/s 00:13:18.708 Latency(us) 00:13:18.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.708 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:18.708 null0 : 5.00 206211.82 805.51 0.00 0.00 308.20 117.37 2293.76 00:13:18.708 =================================================================================================================== 00:13:18.709 Total : 206211.82 805.51 0.00 0.00 308.20 117.37 2293.76 00:13:18.709 21:55:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:18.709 21:55:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:18.709 21:55:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:18.709 21:55:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:18.709 21:55:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:18.709 21:55:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:18.709 { 00:13:18.709 "subsystems": [ 00:13:18.709 { 00:13:18.709 "subsystem": "bdev", 00:13:18.709 "config": [ 00:13:18.709 { 00:13:18.709 "params": { 00:13:18.709 "io_mechanism": "io_uring", 00:13:18.709 "filename": "/dev/nullb0", 00:13:18.709 "name": "null0" 00:13:18.709 }, 00:13:18.709 "method": "bdev_xnvme_create" 00:13:18.709 }, 00:13:18.709 { 00:13:18.709 "method": "bdev_wait_for_examine" 00:13:18.709 } 00:13:18.709 ] 00:13:18.709 } 00:13:18.709 ] 00:13:18.709 } 00:13:18.709 [2024-09-30 21:55:03.500243] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
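Both bdevperf passes (libaio above, io_uring here) share one invocation and differ only in the io_mechanism carried by the generated JSON. The flag glosses below follow bdevperf's conventional option meanings and are an inference, not taken from this log:

    # queue depth 64, 4096-byte random reads for 5 s against the bdev named null0
    build/examples/bdevperf --json <(gen_conf) -q 64 -w randread -t 5 -T null0 -o 4096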
00:13:18.709 [2024-09-30 21:55:03.500353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82142 ] 00:13:18.966 [2024-09-30 21:55:03.628055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:18.966 [2024-09-30 21:55:03.646345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.966 [2024-09-30 21:55:03.683923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.966 Running I/O for 5 seconds... 00:13:24.074 185792.00 IOPS, 725.75 MiB/s 181536.00 IOPS, 709.12 MiB/s 179989.33 IOPS, 703.08 MiB/s 192560.00 IOPS, 752.19 MiB/s 00:13:24.074 Latency(us) 00:13:24.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.074 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:24.074 null0 : 5.00 201770.01 788.16 0.00 0.00 314.76 161.48 2066.90 00:13:24.074 =================================================================================================================== 00:13:24.074 Total : 201770.01 788.16 0.00 0.00 314.76 161.48 2066.90 00:13:24.333 21:55:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:24.333 21:55:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:24.333 ************************************ 00:13:24.333 END TEST xnvme_bdevperf 00:13:24.333 ************************************ 00:13:24.333 00:13:24.333 real 0m11.033s 00:13:24.333 user 0m8.599s 00:13:24.333 sys 0m2.187s 00:13:24.333 21:55:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.333 21:55:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:24.333 ************************************ 00:13:24.333 END TEST nvme_xnvme 00:13:24.333 ************************************ 00:13:24.333 00:13:24.333 real 0m29.491s 00:13:24.333 user 0m23.928s 00:13:24.333 sys 0m4.799s 00:13:24.333 21:55:09 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:24.333 21:55:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.333 21:55:09 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:24.333 21:55:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:24.333 21:55:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:24.333 21:55:09 -- common/autotest_common.sh@10 -- # set +x 00:13:24.333 ************************************ 00:13:24.333 START TEST blockdev_xnvme 00:13:24.333 ************************************ 00:13:24.333 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:24.333 * Looking for test storage... 
00:13:24.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:24.333 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:24.333 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:24.333 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:24.592 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.592 21:55:09 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:24.592 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.592 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:24.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.592 --rc genhtml_branch_coverage=1 00:13:24.592 --rc genhtml_function_coverage=1 00:13:24.592 --rc genhtml_legend=1 00:13:24.592 --rc geninfo_all_blocks=1 00:13:24.592 --rc geninfo_unexecuted_blocks=1 00:13:24.592 00:13:24.592 ' 00:13:24.592 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:24.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.592 --rc genhtml_branch_coverage=1 00:13:24.592 --rc genhtml_function_coverage=1 00:13:24.592 --rc genhtml_legend=1 
00:13:24.592 --rc geninfo_all_blocks=1 00:13:24.593 --rc geninfo_unexecuted_blocks=1 00:13:24.593 00:13:24.593 ' 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:24.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.593 --rc genhtml_branch_coverage=1 00:13:24.593 --rc genhtml_function_coverage=1 00:13:24.593 --rc genhtml_legend=1 00:13:24.593 --rc geninfo_all_blocks=1 00:13:24.593 --rc geninfo_unexecuted_blocks=1 00:13:24.593 00:13:24.593 ' 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:24.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.593 --rc genhtml_branch_coverage=1 00:13:24.593 --rc genhtml_function_coverage=1 00:13:24.593 --rc genhtml_legend=1 00:13:24.593 --rc geninfo_all_blocks=1 00:13:24.593 --rc geninfo_unexecuted_blocks=1 00:13:24.593 00:13:24.593 ' 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=82278 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 82278 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 82278 ']' 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:24.593 21:55:09 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:24.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:24.593 21:55:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.593 [2024-09-30 21:55:09.261131] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:24.593 [2024-09-30 21:55:09.261374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82278 ] 00:13:24.593 [2024-09-30 21:55:09.389362] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:24.593 [2024-09-30 21:55:09.401928] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.851 [2024-09-30 21:55:09.441736] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.418 21:55:10 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.418 21:55:10 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:25.418 21:55:10 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:25.418 21:55:10 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:25.418 21:55:10 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:25.418 21:55:10 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:25.418 21:55:10 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:25.676 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.676 Waiting for block devices as requested 00:13:25.933 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.933 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.933 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:25.933 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:31.257 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
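The zoned-device filter being traced here (and repeated below for each remaining namespace) reduces to one sysfs probe per device; a condensed equivalent of the traced checks, where the missing-attribute fallback is an assumption:

    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1   # assumed: missing attr => not zoned
        [[ $(</sys/block/$device/queue/zoned) != none ]]
    }

Every device in this run reports "none" (conventional, not zoned), so each [[ none != none ]] test fails and zoned_devs stays empty.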
00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
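The namespace enumeration now underway collects one bdev_xnvme_create line per usable block node; the [[ -z '' ]] tests are presumably the zoned_devs lookups coming back empty. Condensed sketch (map name assumed):

    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue                          # block node exists
        [[ -z ${zoned_devs[${nvme##*/}]} ]] || continue     # skip zoned devices
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism")
    done

The collected lines are then fed to rpc_cmd in one batch, as the printf '%s\n' pipeline below shows; the equivalent one-off call against a running target would be scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring.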
00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:31.257 nvme0n1 00:13:31.257 nvme1n1 00:13:31.257 nvme2n1 00:13:31.257 nvme2n2 00:13:31.257 nvme2n3 00:13:31.257 nvme3n1 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.257 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.257 21:55:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 21:55:15 blockdev_xnvme -- 
bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eae0e748-754f-4667-bf96-c43b11a9307c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eae0e748-754f-4667-bf96-c43b11a9307c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "54fb38b9-b3db-4856-9355-edbdda85e404"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "54fb38b9-b3db-4856-9355-edbdda85e404",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "aaef66b1-c4d4-4ef3-b8ce-34a9d5934fc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aaef66b1-c4d4-4ef3-b8ce-34a9d5934fc3",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "23fdf7e3-fcbb-4e2c-b97f-1629c79d6a62"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23fdf7e3-fcbb-4e2c-b97f-1629c79d6a62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d25566f3-bd56-4f8e-9a1e-4b84be9cbfc1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d25566f3-bd56-4f8e-9a1e-4b84be9cbfc1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "169b36f7-ca56-4f95-9bb0-9cbdc992f041"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "169b36f7-ca56-4f95-9bb0-9cbdc992f041",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@751 -- # 
hello_world_bdev=nvme0n1 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:31.258 21:55:15 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 82278 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 82278 ']' 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 82278 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82278 00:13:31.258 killing process with pid 82278 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82278' 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 82278 00:13:31.258 21:55:15 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 82278 00:13:31.516 21:55:16 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:31.516 21:55:16 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:31.516 21:55:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:31.516 21:55:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:31.516 21:55:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.516 ************************************ 00:13:31.516 START TEST bdev_hello_world 00:13:31.516 ************************************ 00:13:31.516 21:55:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:31.774 [2024-09-30 21:55:16.331059] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:31.774 [2024-09-30 21:55:16.331732] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82620 ] 00:13:31.774 [2024-09-30 21:55:16.460305] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
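bdev_hello_world launches SPDK's stock hello-world example against the first xnvme bdev; the NOTICE lines that follow walk its sequence (start the app, open the bdev, open an I/O channel, write a string, read it back). The command under test:

    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1

where bdev.json is presumably the config captured from the save_subsystem_config dumps above and -b names the bdev to open.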
00:13:31.774 [2024-09-30 21:55:16.478560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.774 [2024-09-30 21:55:16.519489] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.031 [2024-09-30 21:55:16.694826] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:32.031 [2024-09-30 21:55:16.694873] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:32.031 [2024-09-30 21:55:16.694888] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:32.031 [2024-09-30 21:55:16.696621] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:32.032 [2024-09-30 21:55:16.696840] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:32.032 [2024-09-30 21:55:16.696864] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:32.032 [2024-09-30 21:55:16.696984] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:32.032 00:13:32.032 [2024-09-30 21:55:16.697002] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:32.289 00:13:32.289 real 0m0.597s 00:13:32.289 user 0m0.321s 00:13:32.289 sys 0m0.167s 00:13:32.289 21:55:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.289 ************************************ 00:13:32.289 21:55:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:32.289 END TEST bdev_hello_world 00:13:32.289 ************************************ 00:13:32.289 21:55:16 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:32.289 21:55:16 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:32.289 21:55:16 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.289 21:55:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:32.289 ************************************ 00:13:32.289 START TEST bdev_bounds 00:13:32.289 ************************************ 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:32.289 Process bdevio pid: 82651 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=82651 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 82651' 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 82651 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 82651 ']' 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
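bdev_bounds runs the bdevio CUnit harness over all six bdevs. Per the launch line in the trace, bdevio is started waiting (-w) and then driven over RPC by tests.py; -s 0 carries the PRE_RESERVED_MEM=0 set when the suite started. Both glosses are inferences from the surrounding trace rather than from bdevio's own help text:

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json ''
    test/bdev/bdevio/tests.py perform_tests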
00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:32.289 21:55:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:32.289 [2024-09-30 21:55:16.970824] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:32.290 [2024-09-30 21:55:16.971055] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82651 ] 00:13:32.547 [2024-09-30 21:55:17.100426] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:32.547 [2024-09-30 21:55:17.118343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.547 [2024-09-30 21:55:17.160230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.547 [2024-09-30 21:55:17.160294] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.547 [2024-09-30 21:55:17.160353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.111 21:55:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:33.111 21:55:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:33.111 21:55:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:33.111 I/O targets: 00:13:33.111 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:33.111 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:33.111 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:33.111 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:33.111 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:33.111 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:33.111 00:13:33.111 00:13:33.111 CUnit - A unit testing framework for C - Version 2.1-3 00:13:33.111 http://cunit.sourceforge.net/ 00:13:33.111 00:13:33.111 00:13:33.111 Suite: bdevio tests on: nvme3n1 00:13:33.111 Test: blockdev write read block ...passed 00:13:33.111 Test: blockdev write zeroes read block ...passed 00:13:33.111 Test: blockdev write zeroes read no split ...passed 00:13:33.369 Test: blockdev write zeroes read split ...passed 00:13:33.369 Test: blockdev write zeroes read split partial ...passed 00:13:33.369 Test: blockdev reset ...passed 00:13:33.369 Test: blockdev write read 8 blocks ...passed 00:13:33.369 Test: blockdev write read size > 128k ...passed 00:13:33.369 Test: blockdev write read invalid size ...passed 00:13:33.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.369 Test: blockdev write read max offset ...passed 00:13:33.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.369 Test: blockdev writev readv 8 blocks ...passed 00:13:33.369 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.369 Test: blockdev writev readv block ...passed 00:13:33.369 Test: blockdev writev readv size > 128k ...passed 00:13:33.369 Test: blockdev writev readv size > 128k in two iovs 
...passed 00:13:33.369 Test: blockdev comparev and writev ...passed 00:13:33.369 Test: blockdev nvme passthru rw ...passed 00:13:33.369 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.369 Test: blockdev nvme admin passthru ...passed 00:13:33.369 Test: blockdev copy ...passed 00:13:33.369 Suite: bdevio tests on: nvme2n3 00:13:33.369 Test: blockdev write read block ...passed 00:13:33.369 Test: blockdev write zeroes read block ...passed 00:13:33.369 Test: blockdev write zeroes read no split ...passed 00:13:33.369 Test: blockdev write zeroes read split ...passed 00:13:33.369 Test: blockdev write zeroes read split partial ...passed 00:13:33.369 Test: blockdev reset ...passed 00:13:33.369 Test: blockdev write read 8 blocks ...passed 00:13:33.369 Test: blockdev write read size > 128k ...passed 00:13:33.369 Test: blockdev write read invalid size ...passed 00:13:33.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.369 Test: blockdev write read max offset ...passed 00:13:33.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.369 Test: blockdev writev readv 8 blocks ...passed 00:13:33.369 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.369 Test: blockdev writev readv block ...passed 00:13:33.369 Test: blockdev writev readv size > 128k ...passed 00:13:33.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.369 Test: blockdev comparev and writev ...passed 00:13:33.369 Test: blockdev nvme passthru rw ...passed 00:13:33.369 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.369 Test: blockdev nvme admin passthru ...passed 00:13:33.369 Test: blockdev copy ...passed 00:13:33.369 Suite: bdevio tests on: nvme2n2 00:13:33.369 Test: blockdev write read block ...passed 00:13:33.369 Test: blockdev write zeroes read block ...passed 00:13:33.369 Test: blockdev write zeroes read no split ...passed 00:13:33.369 Test: blockdev write zeroes read split ...passed 00:13:33.369 Test: blockdev write zeroes read split partial ...passed 00:13:33.369 Test: blockdev reset ...passed 00:13:33.369 Test: blockdev write read 8 blocks ...passed 00:13:33.369 Test: blockdev write read size > 128k ...passed 00:13:33.369 Test: blockdev write read invalid size ...passed 00:13:33.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.369 Test: blockdev write read max offset ...passed 00:13:33.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.369 Test: blockdev writev readv 8 blocks ...passed 00:13:33.369 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.369 Test: blockdev writev readv block ...passed 00:13:33.369 Test: blockdev writev readv size > 128k ...passed 00:13:33.369 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.369 Test: blockdev comparev and writev ...passed 00:13:33.369 Test: blockdev nvme passthru rw ...passed 00:13:33.369 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.369 Test: blockdev nvme admin passthru ...passed 00:13:33.369 Test: blockdev copy ...passed 00:13:33.369 Suite: bdevio tests on: nvme2n1 00:13:33.369 Test: blockdev write read block ...passed 00:13:33.369 Test: blockdev write zeroes read block ...passed 00:13:33.369 Test: blockdev write zeroes read no split ...passed 00:13:33.369 Test: blockdev write 
zeroes read split ...passed 00:13:33.369 Test: blockdev write zeroes read split partial ...passed 00:13:33.369 Test: blockdev reset ...passed 00:13:33.369 Test: blockdev write read 8 blocks ...passed 00:13:33.370 Test: blockdev write read size > 128k ...passed 00:13:33.370 Test: blockdev write read invalid size ...passed 00:13:33.370 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.370 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.370 Test: blockdev write read max offset ...passed 00:13:33.370 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.370 Test: blockdev writev readv 8 blocks ...passed 00:13:33.370 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.370 Test: blockdev writev readv block ...passed 00:13:33.370 Test: blockdev writev readv size > 128k ...passed 00:13:33.370 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.370 Test: blockdev comparev and writev ...passed 00:13:33.370 Test: blockdev nvme passthru rw ...passed 00:13:33.370 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.370 Test: blockdev nvme admin passthru ...passed 00:13:33.370 Test: blockdev copy ...passed 00:13:33.370 Suite: bdevio tests on: nvme1n1 00:13:33.370 Test: blockdev write read block ...passed 00:13:33.370 Test: blockdev write zeroes read block ...passed 00:13:33.370 Test: blockdev write zeroes read no split ...passed 00:13:33.370 Test: blockdev write zeroes read split ...passed 00:13:33.370 Test: blockdev write zeroes read split partial ...passed 00:13:33.370 Test: blockdev reset ...passed 00:13:33.370 Test: blockdev write read 8 blocks ...passed 00:13:33.370 Test: blockdev write read size > 128k ...passed 00:13:33.370 Test: blockdev write read invalid size ...passed 00:13:33.370 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.370 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.370 Test: blockdev write read max offset ...passed 00:13:33.370 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.370 Test: blockdev writev readv 8 blocks ...passed 00:13:33.370 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.370 Test: blockdev writev readv block ...passed 00:13:33.370 Test: blockdev writev readv size > 128k ...passed 00:13:33.370 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.370 Test: blockdev comparev and writev ...passed 00:13:33.370 Test: blockdev nvme passthru rw ...passed 00:13:33.370 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.370 Test: blockdev nvme admin passthru ...passed 00:13:33.370 Test: blockdev copy ...passed 00:13:33.370 Suite: bdevio tests on: nvme0n1 00:13:33.370 Test: blockdev write read block ...passed 00:13:33.370 Test: blockdev write zeroes read block ...passed 00:13:33.370 Test: blockdev write zeroes read no split ...passed 00:13:33.370 Test: blockdev write zeroes read split ...passed 00:13:33.370 Test: blockdev write zeroes read split partial ...passed 00:13:33.370 Test: blockdev reset ...passed 00:13:33.370 Test: blockdev write read 8 blocks ...passed 00:13:33.370 Test: blockdev write read size > 128k ...passed 00:13:33.370 Test: blockdev write read invalid size ...passed 00:13:33.370 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:33.370 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:33.370 Test: blockdev write read max offset ...passed 
00:13:33.370 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:33.370 Test: blockdev writev readv 8 blocks ...passed 00:13:33.370 Test: blockdev writev readv 30 x 1block ...passed 00:13:33.370 Test: blockdev writev readv block ...passed 00:13:33.370 Test: blockdev writev readv size > 128k ...passed 00:13:33.370 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:33.370 Test: blockdev comparev and writev ...passed 00:13:33.370 Test: blockdev nvme passthru rw ...passed 00:13:33.370 Test: blockdev nvme passthru vendor specific ...passed 00:13:33.370 Test: blockdev nvme admin passthru ...passed 00:13:33.370 Test: blockdev copy ...passed 00:13:33.370 00:13:33.370 Run Summary: Type Total Ran Passed Failed Inactive 00:13:33.370 suites 6 6 n/a 0 0 00:13:33.370 tests 138 138 138 0 0 00:13:33.370 asserts 780 780 780 0 n/a 00:13:33.370 00:13:33.370 Elapsed time = 0.272 seconds 00:13:33.370 0 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 82651 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 82651 ']' 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 82651 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82651 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82651' 00:13:33.370 killing process with pid 82651 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 82651 00:13:33.370 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 82651 00:13:33.629 21:55:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:33.629 00:13:33.629 real 0m1.345s 00:13:33.629 user 0m3.338s 00:13:33.629 sys 0m0.302s 00:13:33.629 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.629 21:55:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:33.629 ************************************ 00:13:33.629 END TEST bdev_bounds 00:13:33.629 ************************************ 00:13:33.629 21:55:18 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:33.629 21:55:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:13:33.629 21:55:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.629 21:55:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:33.629 ************************************ 00:13:33.629 START TEST bdev_nbd 00:13:33.629 ************************************ 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- 
# uname -s 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=82695 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 82695 /var/tmp/spdk-nbd.sock 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 82695 ']' 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:33.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:33.629 21:55:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:33.629 [2024-09-30 21:55:18.355851] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
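The bdev_svc launch traced just above is paired with a wait on its RPC socket before any nbd_start_disk calls are issued. A minimal sketch of that setup, assuming the same layout as the traced run (the polling loop below stands in for the real waitforlisten helper from common/autotest_common.sh, whose exact check is not shown in this trace):

    # Start the SPDK bdev service on a private RPC socket, driven by the
    # same JSON bdev config the test passes in (flags as shown in the log).
    rpc_sock=/var/tmp/spdk-nbd.sock
    ./test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 \
        --json ./test/bdev/bdev.json &
    nbd_pid=$!
    # Poll until the socket answers a trivial RPC; rpc_get_methods is a
    # built-in method, and -t sets the client timeout in seconds.
    until ./scripts/rpc.py -s "$rpc_sock" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

Once the socket answers, the test tears everything down on exit via the trap registered above, which kills $nbd_pid.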
00:13:33.629 [2024-09-30 21:55:18.356090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.886 [2024-09-30 21:55:18.481800] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:33.886 [2024-09-30 21:55:18.500078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.886 [2024-09-30 21:55:18.540714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.450 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 
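The counter and grep being traced here belong to the waitfornbd helper: it polls /proc/partitions until the kernel exposes the newly attached NBD device, then proves the device actually services I/O with a single direct 4 KiB read. A minimal sketch of the same pattern (the temp-file path is illustrative and the retry delay is an assumption; the trace only shows the counter, the grep, and the dd):

    # Wait up to 20 tries for /dev/$1 to appear, then read one block.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # O_DIRECT read of one 4096-byte block; a zero-length result would
        # mean the device node exists but is not backed by the bdev yet.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }

In the trace below, nbd0 is already present on the first grep, so the loop breaks immediately and the dd copies a full 4096 bytes.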
00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.708 1+0 records in 00:13:34.708 1+0 records out 00:13:34.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294426 s, 13.9 MB/s 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.708 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.965 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.965 1+0 records in 00:13:34.965 1+0 records out 00:13:34.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648241 s, 6.3 MB/s 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:34.966 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:35.224 21:55:19 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.224 1+0 records in 00:13:35.224 1+0 records out 00:13:35.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384928 s, 10.6 MB/s 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.224 21:55:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.482 1+0 
records in 00:13:35.482 1+0 records out 00:13:35.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000396576 s, 10.3 MB/s 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.482 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.740 1+0 records in 00:13:35.740 1+0 records out 00:13:35.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336789 s, 12.2 MB/s 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.740 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:35.997 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:35.997 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:35.998 21:55:20 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.998 1+0 records in 00:13:35.998 1+0 records out 00:13:35.998 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533045 s, 7.7 MB/s 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:35.998 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd0", 00:13:36.256 "bdev_name": "nvme0n1" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd1", 00:13:36.256 "bdev_name": "nvme1n1" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd2", 00:13:36.256 "bdev_name": "nvme2n1" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd3", 00:13:36.256 "bdev_name": "nvme2n2" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd4", 00:13:36.256 "bdev_name": "nvme2n3" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd5", 00:13:36.256 "bdev_name": "nvme3n1" 00:13:36.256 } 00:13:36.256 ]' 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd0", 00:13:36.256 "bdev_name": "nvme0n1" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd1", 00:13:36.256 "bdev_name": "nvme1n1" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd2", 00:13:36.256 "bdev_name": "nvme2n1" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd3", 00:13:36.256 "bdev_name": "nvme2n2" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd4", 
00:13:36.256 "bdev_name": "nvme2n3" 00:13:36.256 }, 00:13:36.256 { 00:13:36.256 "nbd_device": "/dev/nbd5", 00:13:36.256 "bdev_name": "nvme3n1" 00:13:36.256 } 00:13:36.256 ]' 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.256 21:55:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.256 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.515 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.773 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.033 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.293 21:55:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.293 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:37.554 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.555 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:37.555 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:37.813 /dev/nbd0 00:13:37.813 21:55:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.813 1+0 records in 00:13:37.813 1+0 records out 00:13:37.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659097 s, 6.2 MB/s 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:37.813 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:38.071 /dev/nbd1 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.071 1+0 records in 00:13:38.071 1+0 records out 00:13:38.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420562 s, 9.7 MB/s 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.071 21:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:38.331 /dev/nbd10 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.331 1+0 records in 00:13:38.331 1+0 records out 00:13:38.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043519 s, 9.4 MB/s 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.331 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:38.589 /dev/nbd11 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.589 21:55:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.589 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.589 1+0 records in 00:13:38.589 1+0 records out 00:13:38.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562111 s, 7.3 MB/s 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.590 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:38.848 /dev/nbd12 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:38.848 1+0 records in 00:13:38.848 1+0 records out 00:13:38.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366753 s, 11.2 MB/s 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:38.848 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:39.106 /dev/nbd13 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:39.106 1+0 records in 00:13:39.106 1+0 records out 00:13:39.106 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455455 s, 9.0 MB/s 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:39.106 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.107 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd0", 00:13:39.365 "bdev_name": "nvme0n1" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd1", 00:13:39.365 "bdev_name": "nvme1n1" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd10", 00:13:39.365 "bdev_name": "nvme2n1" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd11", 00:13:39.365 "bdev_name": "nvme2n2" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd12", 00:13:39.365 
"bdev_name": "nvme2n3" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd13", 00:13:39.365 "bdev_name": "nvme3n1" 00:13:39.365 } 00:13:39.365 ]' 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd0", 00:13:39.365 "bdev_name": "nvme0n1" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd1", 00:13:39.365 "bdev_name": "nvme1n1" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd10", 00:13:39.365 "bdev_name": "nvme2n1" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd11", 00:13:39.365 "bdev_name": "nvme2n2" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd12", 00:13:39.365 "bdev_name": "nvme2n3" 00:13:39.365 }, 00:13:39.365 { 00:13:39.365 "nbd_device": "/dev/nbd13", 00:13:39.365 "bdev_name": "nvme3n1" 00:13:39.365 } 00:13:39.365 ]' 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:39.365 /dev/nbd1 00:13:39.365 /dev/nbd10 00:13:39.365 /dev/nbd11 00:13:39.365 /dev/nbd12 00:13:39.365 /dev/nbd13' 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:39.365 /dev/nbd1 00:13:39.365 /dev/nbd10 00:13:39.365 /dev/nbd11 00:13:39.365 /dev/nbd12 00:13:39.365 /dev/nbd13' 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:39.365 256+0 records in 00:13:39.365 256+0 records out 00:13:39.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119528 s, 87.7 MB/s 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.365 21:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:39.365 256+0 records in 00:13:39.365 256+0 records out 00:13:39.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0707255 s, 14.8 MB/s 00:13:39.365 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.365 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:39.365 256+0 records in 00:13:39.365 256+0 records out 00:13:39.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0765541 s, 13.7 MB/s 00:13:39.365 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.365 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:39.623 256+0 records in 00:13:39.623 256+0 records out 00:13:39.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0662746 s, 15.8 MB/s 00:13:39.623 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.623 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:39.623 256+0 records in 00:13:39.623 256+0 records out 00:13:39.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0588769 s, 17.8 MB/s 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:39.624 256+0 records in 00:13:39.624 256+0 records out 00:13:39.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0616403 s, 17.0 MB/s 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:39.624 256+0 records in 00:13:39.624 256+0 records out 00:13:39.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0711057 s, 14.7 MB/s 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.624 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:39.884 21:55:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.884 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.147 21:55:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.147 21:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.405 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.663 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.920 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:41.178 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:41.435 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:41.435 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:41.435 21:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:41.435 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:41.436 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:41.436 malloc_lvol_verify 00:13:41.436 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:41.693 78c8aed9-e571-421a-8c76-70e493ee96ea 00:13:41.693 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:41.951 fe504c8b-3b20-4323-938b-2e6c5e186d94 00:13:41.951 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:42.208 /dev/nbd0 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:42.208 21:55:26 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:42.208 mke2fs 1.47.0 (5-Feb-2023) 00:13:42.208 Discarding device blocks: 0/4096 done 00:13:42.208 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:42.208 00:13:42.208 Allocating group tables: 0/1 done 00:13:42.208 Writing inode tables: 0/1 done 00:13:42.208 Creating journal (1024 blocks): done 00:13:42.208 Writing superblocks and filesystem accounting information: 0/1 done 00:13:42.208 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.208 21:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 82695 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 82695 ']' 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 82695 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82695 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82695' 00:13:42.466 killing process with pid 82695 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 82695 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 82695 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:42.466 00:13:42.466 real 0m8.981s 00:13:42.466 user 0m13.064s 00:13:42.466 sys 0m3.068s 00:13:42.466 ************************************ 00:13:42.466 END TEST 
bdev_nbd 00:13:42.466 ************************************ 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:42.466 21:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:42.725 21:55:27 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:42.725 21:55:27 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:42.725 21:55:27 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:42.725 21:55:27 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:42.725 21:55:27 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:42.725 21:55:27 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.725 21:55:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.725 ************************************ 00:13:42.725 START TEST bdev_fio 00:13:42.725 ************************************ 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:42.725 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:42.725 21:55:27 
blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:42.725 ************************************ 00:13:42.725 START TEST bdev_fio_rw_verify 00:13:42.725 ************************************ 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:42.725 21:55:27 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:42.986 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.986 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.986 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.986 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.986 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.986 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:42.986 fio-3.35 00:13:42.986 Starting 6 threads 00:13:55.238 00:13:55.238 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=83083: Mon Sep 30 21:55:38 2024 00:13:55.238 read: IOPS=31.8k, 
BW=124MiB/s (130MB/s)(1241MiB/10001msec) 00:13:55.238 slat (usec): min=2, max=1212, avg= 4.73, stdev= 4.70 00:13:55.238 clat (usec): min=56, max=5425, avg=542.50, stdev=354.08 00:13:55.238 lat (usec): min=60, max=5428, avg=547.24, stdev=354.51 00:13:55.239 clat percentiles (usec): 00:13:55.239 | 50.000th=[ 465], 99.000th=[ 1795], 99.900th=[ 2966], 99.990th=[ 4080], 00:13:55.239 | 99.999th=[ 5276] 00:13:55.239 write: IOPS=32.1k, BW=125MiB/s (132MB/s)(1255MiB/10001msec); 0 zone resets 00:13:55.239 slat (usec): min=3, max=2980, avg=26.68, stdev=63.52 00:13:55.239 clat (usec): min=46, max=8946, avg=721.03, stdev=412.71 00:13:55.239 lat (usec): min=70, max=8978, avg=747.71, stdev=419.84 00:13:55.239 clat percentiles (usec): 00:13:55.239 | 50.000th=[ 635], 99.000th=[ 2212], 99.900th=[ 3294], 99.990th=[ 4293], 00:13:55.239 | 99.999th=[ 8848] 00:13:55.239 bw ( KiB/s): min=89618, max=158544, per=100.00%, avg=129470.95, stdev=3488.85, samples=114 00:13:55.239 iops : min=22404, max=39635, avg=32366.95, stdev=872.24, samples=114 00:13:55.239 lat (usec) : 50=0.01%, 100=0.09%, 250=11.42%, 500=31.68%, 750=28.06% 00:13:55.239 lat (usec) : 1000=15.17% 00:13:55.239 lat (msec) : 2=12.52%, 4=1.05%, 10=0.02% 00:13:55.239 cpu : usr=47.45%, sys=32.65%, ctx=8554, majf=0, minf=26661 00:13:55.239 IO depths : 1=11.7%, 2=24.0%, 4=50.9%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:55.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.239 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:55.239 issued rwts: total=317797,321188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:55.239 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:55.239 00:13:55.239 Run status group 0 (all jobs): 00:13:55.239 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=1241MiB (1302MB), run=10001-10001msec 00:13:55.239 WRITE: bw=125MiB/s (132MB/s), 125MiB/s-125MiB/s (132MB/s-132MB/s), io=1255MiB (1316MB), run=10001-10001msec 00:13:55.239 ----------------------------------------------------- 00:13:55.239 Suppressions used: 00:13:55.239 count bytes template 00:13:55.239 6 48 /usr/src/fio/parse.c 00:13:55.239 3156 302976 /usr/src/fio/iolog.c 00:13:55.239 1 8 libtcmalloc_minimal.so 00:13:55.239 1 904 libcrypto.so 00:13:55.239 ----------------------------------------------------- 00:13:55.239 00:13:55.239 00:13:55.239 real 0m11.079s 00:13:55.239 user 0m29.151s 00:13:55.239 sys 0m19.872s 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.239 ************************************ 00:13:55.239 END TEST bdev_fio_rw_verify 00:13:55.239 ************************************ 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:55.239 
21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eae0e748-754f-4667-bf96-c43b11a9307c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "eae0e748-754f-4667-bf96-c43b11a9307c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "54fb38b9-b3db-4856-9355-edbdda85e404"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "54fb38b9-b3db-4856-9355-edbdda85e404",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "aaef66b1-c4d4-4ef3-b8ce-34a9d5934fc3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aaef66b1-c4d4-4ef3-b8ce-34a9d5934fc3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": 
false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "23fdf7e3-fcbb-4e2c-b97f-1629c79d6a62"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "23fdf7e3-fcbb-4e2c-b97f-1629c79d6a62",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "d25566f3-bd56-4f8e-9a1e-4b84be9cbfc1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d25566f3-bd56-4f8e-9a1e-4b84be9cbfc1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "169b36f7-ca56-4f95-9bb0-9cbdc992f041"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "169b36f7-ca56-4f95-9bb0-9cbdc992f041",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:55.239 /home/vagrant/spdk_repo/spdk 00:13:55.239 
21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:55.239 00:13:55.239 real 0m11.217s 00:13:55.239 user 0m29.222s 00:13:55.239 sys 0m19.941s 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.239 21:55:38 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:55.239 ************************************ 00:13:55.239 END TEST bdev_fio 00:13:55.239 ************************************ 00:13:55.239 21:55:38 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:55.239 21:55:38 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:55.239 21:55:38 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:55.239 21:55:38 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:55.239 21:55:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:55.239 ************************************ 00:13:55.239 START TEST bdev_verify 00:13:55.239 ************************************ 00:13:55.239 21:55:38 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:55.239 [2024-09-30 21:55:38.629775] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:55.239 [2024-09-30 21:55:38.629885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83250 ] 00:13:55.239 [2024-09-30 21:55:38.768030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:55.240 [2024-09-30 21:55:38.785887] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:55.240 [2024-09-30 21:55:38.866980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.240 [2024-09-30 21:55:38.867085] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.240 Running I/O for 5 seconds... 
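For reference, the verify pass just launched reduces to a single bdevperf invocation; the command below is reconstructed from the run_test line in this log (the trailing '' is an empty extra-arguments slot passed through by the harness):

  # -q queue depth, -o I/O size in bytes, -w workload, -t runtime in seconds,
  # -m core mask; -C is forwarded from the harness as-is
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''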
00:13:59.681 24832.00 IOPS, 97.00 MiB/s 23552.00 IOPS, 92.00 MiB/s 23392.00 IOPS, 91.38 MiB/s 23248.00 IOPS, 90.81 MiB/s 23206.40 IOPS, 90.65 MiB/s 00:13:59.681 Latency(us) 00:13:59.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.681 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x0 length 0xa0000 00:13:59.681 nvme0n1 : 5.02 1732.81 6.77 0.00 0.00 73731.24 16535.24 71787.13 00:13:59.681 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0xa0000 length 0xa0000 00:13:59.681 nvme0n1 : 5.05 1596.77 6.24 0.00 0.00 79999.29 17341.83 68560.74 00:13:59.681 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x0 length 0xbd0bd 00:13:59.681 nvme1n1 : 5.06 3264.82 12.75 0.00 0.00 38987.07 4839.58 62511.26 00:13:59.681 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:59.681 nvme1n1 : 5.06 2897.21 11.32 0.00 0.00 43783.31 4612.73 66140.95 00:13:59.681 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x0 length 0x80000 00:13:59.681 nvme2n1 : 5.05 1748.44 6.83 0.00 0.00 72617.63 7612.26 68157.44 00:13:59.681 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x80000 length 0x80000 00:13:59.681 nvme2n1 : 5.07 1616.97 6.32 0.00 0.00 78632.43 5923.45 77030.01 00:13:59.681 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x0 length 0x80000 00:13:59.681 nvme2n2 : 5.06 1744.31 6.81 0.00 0.00 72637.66 6856.07 64527.75 00:13:59.681 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x80000 length 0x80000 00:13:59.681 nvme2n2 : 5.05 1596.07 6.23 0.00 0.00 79500.11 17845.96 71787.13 00:13:59.681 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x0 length 0x80000 00:13:59.681 nvme2n3 : 5.06 1745.55 6.82 0.00 0.00 72439.25 6553.60 64124.46 00:13:59.681 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x80000 length 0x80000 00:13:59.681 nvme2n3 : 5.07 1615.95 6.31 0.00 0.00 78354.16 6175.51 76223.41 00:13:59.681 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x0 length 0x20000 00:13:59.681 nvme3n1 : 5.07 1765.54 6.90 0.00 0.00 71493.63 2898.71 69770.63 00:13:59.681 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.681 Verification LBA range: start 0x20000 length 0x20000 00:13:59.681 nvme3n1 : 5.07 1616.42 6.31 0.00 0.00 78135.66 6755.25 70173.93 00:13:59.681 =================================================================================================================== 00:13:59.681 Total : 22940.86 89.61 0.00 0.00 66385.01 2898.71 77030.01 00:13:59.681 00:13:59.681 real 0m5.812s 00:13:59.681 user 0m9.089s 00:13:59.681 sys 0m1.694s 00:13:59.681 21:55:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.681 21:55:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:59.681 ************************************ 00:13:59.681 END 
TEST bdev_verify 00:13:59.681 ************************************ 00:13:59.681 21:55:44 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.681 21:55:44 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:59.681 21:55:44 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.681 21:55:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.681 ************************************ 00:13:59.681 START TEST bdev_verify_big_io 00:13:59.681 ************************************ 00:13:59.682 21:55:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.682 [2024-09-30 21:55:44.480841] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:13:59.682 [2024-09-30 21:55:44.480963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83340 ] 00:13:59.939 [2024-09-30 21:55:44.610238] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:59.939 [2024-09-30 21:55:44.630284] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:59.939 [2024-09-30 21:55:44.675446] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.939 [2024-09-30 21:55:44.675559] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.196 Running I/O for 5 seconds... 
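The big-I/O variant starting here uses the same harness with -o 65536 in place of -o 4096. To pull the per-device IOPS out of a latency table like the one this run prints, a small awk sketch (field positions assumed from this log's table layout, log filename hypothetical):

  # $1 is the bdev name, $3 the runtime, $4 the IOPS column of the Device Information table
  awk '$1 ~ /^nvme/ && $2 == ":" { printf "%-10s %12s IOPS\n", $1, $4 }' bdevperf.log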
00:14:06.936 792.00 IOPS, 49.50 MiB/s 2835.50 IOPS, 177.22 MiB/s 3320.67 IOPS, 207.54 MiB/s 00:14:06.936 Latency(us) 00:14:06.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.936 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x0 length 0xa000 00:14:06.936 nvme0n1 : 5.95 142.51 8.91 0.00 0.00 855899.21 91145.45 1064707.94 00:14:06.936 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0xa000 length 0xa000 00:14:06.936 nvme0n1 : 6.14 83.34 5.21 0.00 0.00 1466638.97 184710.70 1897115.96 00:14:06.936 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x0 length 0xbd0b 00:14:06.936 nvme1n1 : 5.80 157.36 9.83 0.00 0.00 755063.33 19660.80 896935.78 00:14:06.936 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:06.936 nvme1n1 : 6.15 103.98 6.50 0.00 0.00 1108921.19 34885.32 1858399.31 00:14:06.936 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x0 length 0x8000 00:14:06.936 nvme2n1 : 5.87 130.76 8.17 0.00 0.00 886063.52 199229.44 832408.02 00:14:06.936 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x8000 length 0x8000 00:14:06.936 nvme2n1 : 6.17 103.68 6.48 0.00 0.00 1054284.56 25811.10 1380893.93 00:14:06.936 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x0 length 0x8000 00:14:06.936 nvme2n2 : 5.95 173.37 10.84 0.00 0.00 649775.57 56461.78 719484.46 00:14:06.936 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x8000 length 0x8000 00:14:06.936 nvme2n2 : 6.17 93.29 5.83 0.00 0.00 1112038.79 38716.65 1161499.57 00:14:06.936 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x0 length 0x8000 00:14:06.936 nvme2n3 : 5.95 126.29 7.89 0.00 0.00 865492.91 13107.20 1477685.56 00:14:06.936 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x8000 length 0x8000 00:14:06.936 nvme2n3 : 6.37 118.44 7.40 0.00 0.00 837554.35 3327.21 2594015.70 00:14:06.936 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x0 length 0x2000 00:14:06.936 nvme3n1 : 6.04 158.86 9.93 0.00 0.00 675241.43 3327.21 2452054.65 00:14:06.936 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.936 Verification LBA range: start 0x2000 length 0x2000 00:14:06.936 nvme3n1 : 6.63 255.75 15.98 0.00 0.00 369149.71 2734.87 2632732.36 00:14:06.936 =================================================================================================================== 00:14:06.936 Total : 1647.63 102.98 0.00 0.00 802556.55 2734.87 2632732.36 00:14:07.194 00:14:07.194 real 0m7.368s 00:14:07.194 user 0m13.683s 00:14:07.194 sys 0m0.443s 00:14:07.194 ************************************ 00:14:07.194 END TEST bdev_verify_big_io 00:14:07.194 ************************************ 00:14:07.194 21:55:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.194 21:55:51 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.194 21:55:51 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.194 21:55:51 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:07.194 21:55:51 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.194 21:55:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.194 ************************************ 00:14:07.194 START TEST bdev_write_zeroes 00:14:07.194 ************************************ 00:14:07.194 21:55:51 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.194 [2024-09-30 21:55:51.891282] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:07.194 [2024-09-30 21:55:51.891411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83444 ] 00:14:07.452 [2024-09-30 21:55:52.020423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:07.452 [2024-09-30 21:55:52.041350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.452 [2024-09-30 21:55:52.084546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.710 Running I/O for 1 seconds... 
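The write_zeroes pass runs the same bdevperf binary for one second with no explicit core mask; reconstructed from the run_test line above:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1 ''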
00:14:08.643 81838.00 IOPS, 319.68 MiB/s 00:14:08.643 Latency(us) 00:14:08.643 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.643 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.644 nvme0n1 : 1.02 11634.74 45.45 0.00 0.00 10990.18 7914.73 21072.34 00:14:08.644 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.644 nvme1n1 : 1.02 22943.99 89.62 0.00 0.00 5567.95 3138.17 18148.43 00:14:08.644 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.644 nvme2n1 : 1.02 11687.48 45.65 0.00 0.00 10873.49 7057.72 21475.64 00:14:08.644 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.644 nvme2n2 : 1.02 11674.31 45.60 0.00 0.00 10878.02 7158.55 21778.12 00:14:08.644 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.644 nvme2n3 : 1.02 11661.18 45.55 0.00 0.00 10882.86 7158.55 22080.59 00:14:08.644 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:08.644 nvme3n1 : 1.02 11647.91 45.50 0.00 0.00 10888.14 7158.55 22483.89 00:14:08.644 =================================================================================================================== 00:14:08.644 Total : 81249.60 317.38 0.00 0.00 9399.80 3138.17 22483.89 00:14:08.902 00:14:08.902 real 0m1.683s 00:14:08.902 user 0m0.916s 00:14:08.902 sys 0m0.626s 00:14:08.902 21:55:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:08.902 21:55:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:08.902 ************************************ 00:14:08.902 END TEST bdev_write_zeroes 00:14:08.902 ************************************ 00:14:08.902 21:55:53 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.902 21:55:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:08.902 21:55:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:08.902 21:55:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:08.902 ************************************ 00:14:08.902 START TEST bdev_json_nonenclosed 00:14:08.902 ************************************ 00:14:08.902 21:55:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:08.902 [2024-09-30 21:55:53.613757] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:08.902 [2024-09-30 21:55:53.614066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83486 ] 00:14:09.160 [2024-09-30 21:55:53.743093] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
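The nonenclosed test that just started deliberately feeds bdevperf a configuration that is not wrapped in a top-level JSON object, which json_config.c rejects. A minimal well-formed skeleton for comparison, written as a bash heredoc (shape taken from the error messages below and the save_config dump later in this log; an empty subsystems list is assumed to be accepted):

  cat > /tmp/minimal_config.json <<'EOF'
  {
    "subsystems": []
  }
  EOF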
00:14:09.160 [2024-09-30 21:55:53.762013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.160 [2024-09-30 21:55:53.805471] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.160 [2024-09-30 21:55:53.805574] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:09.160 [2024-09-30 21:55:53.805593] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:09.160 [2024-09-30 21:55:53.805603] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:09.160 00:14:09.160 real 0m0.349s 00:14:09.160 user 0m0.146s 00:14:09.160 sys 0m0.099s 00:14:09.160 21:55:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.160 21:55:53 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:09.160 ************************************ 00:14:09.160 END TEST bdev_json_nonenclosed 00:14:09.160 ************************************ 00:14:09.160 21:55:53 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.160 21:55:53 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:09.160 21:55:53 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.160 21:55:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:09.160 ************************************ 00:14:09.160 START TEST bdev_json_nonarray 00:14:09.160 ************************************ 00:14:09.160 21:55:53 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:09.419 [2024-09-30 21:55:54.000721] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:09.419 [2024-09-30 21:55:54.000832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83506 ] 00:14:09.419 [2024-09-30 21:55:54.130294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:09.419 [2024-09-30 21:55:54.150076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:09.419 [2024-09-30 21:55:54.193694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.419 [2024-09-30 21:55:54.193794] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
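The nonarray companion trips the next validation step: 'subsystems' must be a JSON array. One way to pre-check a config before handing it to bdevperf, using jq's -e flag so the exit code reflects the result (filename hypothetical):

  # exits 0 only when .subsystems exists and is an array
  jq -e '.subsystems | type == "array"' /tmp/minimal_config.json >/dev/null \
      && echo 'config shape OK' \
      || echo 'subsystems missing or not an array'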
00:14:09.419 [2024-09-30 21:55:54.193816] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:09.419 [2024-09-30 21:55:54.193826] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:09.677 00:14:09.677 real 0m0.349s 00:14:09.677 user 0m0.150s 00:14:09.677 sys 0m0.095s 00:14:09.677 21:55:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.677 21:55:54 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:09.677 ************************************ 00:14:09.677 END TEST bdev_json_nonarray 00:14:09.677 ************************************ 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:09.677 21:55:54 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:09.936 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:41.998 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.998 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.998 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.998 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.998 00:14:41.998 real 1m16.559s 00:14:41.998 user 1m18.494s 00:14:41.998 sys 1m10.283s 00:14:41.998 21:56:25 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.998 21:56:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.998 ************************************ 00:14:41.998 END TEST blockdev_xnvme 00:14:41.998 ************************************ 00:14:41.998 21:56:25 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:41.998 21:56:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:41.998 21:56:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.998 21:56:25 -- common/autotest_common.sh@10 -- # set +x 00:14:41.998 ************************************ 00:14:41.998 START TEST ublk 00:14:41.998 ************************************ 00:14:41.998 21:56:25 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:41.998 * Looking for test storage... 
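The setup.sh lines above rebind the emulated NVMe controllers from the kernel nvme driver to uio_pci_generic so userspace drivers can claim them. The current binding of any device is visible through sysfs (BDF copied from the log):

  # prints the driver the device is currently bound to, e.g. uio_pci_generic
  basename "$(readlink /sys/bus/pci/devices/0000:00:10.0/driver)"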
00:14:41.998 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:41.998 21:56:25 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:41.998 21:56:25 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:41.998 21:56:25 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:41.998 21:56:25 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:41.998 21:56:25 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:41.998 21:56:25 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:41.998 21:56:25 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:41.998 21:56:25 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:41.998 21:56:25 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:41.998 21:56:25 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:41.998 21:56:25 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:41.998 21:56:25 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:41.998 21:56:25 ublk -- scripts/common.sh@345 -- # : 1 00:14:41.998 21:56:25 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:41.998 21:56:25 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:41.998 21:56:25 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:41.998 21:56:25 ublk -- scripts/common.sh@353 -- # local d=1 00:14:41.998 21:56:25 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.998 21:56:25 ublk -- scripts/common.sh@355 -- # echo 1 00:14:41.998 21:56:25 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.998 21:56:25 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@353 -- # local d=2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.998 21:56:25 ublk -- scripts/common.sh@355 -- # echo 2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.998 21:56:25 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.998 21:56:25 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.998 21:56:25 ublk -- scripts/common.sh@368 -- # return 0 00:14:41.998 21:56:25 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.999 21:56:25 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.999 --rc genhtml_branch_coverage=1 00:14:41.999 --rc genhtml_function_coverage=1 00:14:41.999 --rc genhtml_legend=1 00:14:41.999 --rc geninfo_all_blocks=1 00:14:41.999 --rc geninfo_unexecuted_blocks=1 00:14:41.999 00:14:41.999 ' 00:14:41.999 21:56:25 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.999 --rc genhtml_branch_coverage=1 00:14:41.999 --rc genhtml_function_coverage=1 00:14:41.999 --rc genhtml_legend=1 00:14:41.999 --rc geninfo_all_blocks=1 00:14:41.999 --rc geninfo_unexecuted_blocks=1 00:14:41.999 00:14:41.999 ' 00:14:41.999 21:56:25 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.999 --rc genhtml_branch_coverage=1 00:14:41.999 --rc 
genhtml_function_coverage=1 00:14:41.999 --rc genhtml_legend=1 00:14:41.999 --rc geninfo_all_blocks=1 00:14:41.999 --rc geninfo_unexecuted_blocks=1 00:14:41.999 00:14:41.999 ' 00:14:41.999 21:56:25 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:41.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.999 --rc genhtml_branch_coverage=1 00:14:41.999 --rc genhtml_function_coverage=1 00:14:41.999 --rc genhtml_legend=1 00:14:41.999 --rc geninfo_all_blocks=1 00:14:41.999 --rc geninfo_unexecuted_blocks=1 00:14:41.999 00:14:41.999 ' 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:41.999 21:56:25 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:41.999 21:56:25 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:41.999 21:56:25 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:41.999 21:56:25 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:41.999 21:56:25 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:41.999 21:56:25 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:41.999 21:56:25 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:41.999 21:56:25 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:41.999 21:56:25 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:41.999 21:56:25 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:41.999 21:56:25 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:41.999 21:56:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.999 ************************************ 00:14:41.999 START TEST test_save_ublk_config 00:14:41.999 ************************************ 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=83811 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 83811 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83811 ']' 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:41.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
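Once spdk_tgt is listening on /var/tmp/spdk.sock, the masked rpc_cmd calls below resolve to plain rpc.py invocations. A plausible equivalent of the device setup that follows — the RPC names are inferred from the ublk.c notices and the malloc0 parameters in this log, and the flag spellings are assumptions, not verified against this SPDK revision:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk.sock ublk_create_target
  $rpc -s /var/tmp/spdk.sock bdev_malloc_create -b malloc0 128 4096   # 128 MiB, 4096-byte blocks
  $rpc -s /var/tmp/spdk.sock ublk_start_disk malloc0 0 --num-queues 1 --queue-depth 128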
00:14:41.999 21:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:41.999 21:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:41.999 [2024-09-30 21:56:25.858439] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:41.999 [2024-09-30 21:56:25.858567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83811 ] 00:14:41.999 [2024-09-30 21:56:25.987406] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:41.999 [2024-09-30 21:56:26.006399] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.999 [2024-09-30 21:56:26.061360] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:41.999 [2024-09-30 21:56:26.700214] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:41.999 [2024-09-30 21:56:26.700523] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:41.999 malloc0 00:14:41.999 [2024-09-30 21:56:26.732338] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:41.999 [2024-09-30 21:56:26.732424] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:41.999 [2024-09-30 21:56:26.732437] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:41.999 [2024-09-30 21:56:26.732455] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:41.999 [2024-09-30 21:56:26.741295] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:41.999 [2024-09-30 21:56:26.741317] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:41.999 [2024-09-30 21:56:26.748218] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:41.999 [2024-09-30 21:56:26.748314] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:41.999 [2024-09-30 21:56:26.765220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:41.999 0 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.999 21:56:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 
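The debug lines above trace the full bring-up inside the target: the ublk target is created, a malloc bdev is attached, and /dev/ublkb0 comes online through the ADD_DEV, SET_PARAMS and START_DEV control-command handshake. The save_config dump that follows can be reproduced with SPDK's stock rpc.py client (a sketch; the harness's rpc_cmd is a thin wrapper over the same socket, and the 32 MiB size mirrors the 8192 x 4096-byte blocks recorded in the JSON below):

  cd "$SPDK_DIR"
  ./scripts/rpc.py ublk_create_target
  ./scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks of 4096 B
  ./scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0
  ./scripts/rpc.py save_config > ublk.json                 # the dump printed below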
00:14:42.258 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.258 21:56:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:42.258 "subsystems": [ 00:14:42.258 { 00:14:42.258 "subsystem": "fsdev", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": "fsdev_set_opts", 00:14:42.258 "params": { 00:14:42.258 "fsdev_io_pool_size": 65535, 00:14:42.258 "fsdev_io_cache_size": 256 00:14:42.258 } 00:14:42.258 } 00:14:42.258 ] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "keyring", 00:14:42.258 "config": [] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "iobuf", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": "iobuf_set_options", 00:14:42.258 "params": { 00:14:42.258 "small_pool_count": 8192, 00:14:42.258 "large_pool_count": 1024, 00:14:42.258 "small_bufsize": 8192, 00:14:42.258 "large_bufsize": 135168 00:14:42.258 } 00:14:42.258 } 00:14:42.258 ] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "sock", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": "sock_set_default_impl", 00:14:42.258 "params": { 00:14:42.258 "impl_name": "posix" 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "sock_impl_set_options", 00:14:42.258 "params": { 00:14:42.258 "impl_name": "ssl", 00:14:42.258 "recv_buf_size": 4096, 00:14:42.258 "send_buf_size": 4096, 00:14:42.258 "enable_recv_pipe": true, 00:14:42.258 "enable_quickack": false, 00:14:42.258 "enable_placement_id": 0, 00:14:42.258 "enable_zerocopy_send_server": true, 00:14:42.258 "enable_zerocopy_send_client": false, 00:14:42.258 "zerocopy_threshold": 0, 00:14:42.258 "tls_version": 0, 00:14:42.258 "enable_ktls": false 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "sock_impl_set_options", 00:14:42.258 "params": { 00:14:42.258 "impl_name": "posix", 00:14:42.258 "recv_buf_size": 2097152, 00:14:42.258 "send_buf_size": 2097152, 00:14:42.258 "enable_recv_pipe": true, 00:14:42.258 "enable_quickack": false, 00:14:42.258 "enable_placement_id": 0, 00:14:42.258 "enable_zerocopy_send_server": true, 00:14:42.258 "enable_zerocopy_send_client": false, 00:14:42.258 "zerocopy_threshold": 0, 00:14:42.258 "tls_version": 0, 00:14:42.258 "enable_ktls": false 00:14:42.258 } 00:14:42.258 } 00:14:42.258 ] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "vmd", 00:14:42.258 "config": [] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "accel", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": "accel_set_options", 00:14:42.258 "params": { 00:14:42.258 "small_cache_size": 128, 00:14:42.258 "large_cache_size": 16, 00:14:42.258 "task_count": 2048, 00:14:42.258 "sequence_count": 2048, 00:14:42.258 "buf_count": 2048 00:14:42.258 } 00:14:42.258 } 00:14:42.258 ] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "bdev", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": "bdev_set_options", 00:14:42.258 "params": { 00:14:42.258 "bdev_io_pool_size": 65535, 00:14:42.258 "bdev_io_cache_size": 256, 00:14:42.258 "bdev_auto_examine": true, 00:14:42.258 "iobuf_small_cache_size": 128, 00:14:42.258 "iobuf_large_cache_size": 16 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "bdev_raid_set_options", 00:14:42.258 "params": { 00:14:42.258 "process_window_size_kb": 1024, 00:14:42.258 "process_max_bandwidth_mb_sec": 0 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "bdev_iscsi_set_options", 00:14:42.258 "params": { 00:14:42.258 "timeout_sec": 30 
00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "bdev_nvme_set_options", 00:14:42.258 "params": { 00:14:42.258 "action_on_timeout": "none", 00:14:42.258 "timeout_us": 0, 00:14:42.258 "timeout_admin_us": 0, 00:14:42.258 "keep_alive_timeout_ms": 10000, 00:14:42.258 "arbitration_burst": 0, 00:14:42.258 "low_priority_weight": 0, 00:14:42.258 "medium_priority_weight": 0, 00:14:42.258 "high_priority_weight": 0, 00:14:42.258 "nvme_adminq_poll_period_us": 10000, 00:14:42.258 "nvme_ioq_poll_period_us": 0, 00:14:42.258 "io_queue_requests": 0, 00:14:42.258 "delay_cmd_submit": true, 00:14:42.258 "transport_retry_count": 4, 00:14:42.258 "bdev_retry_count": 3, 00:14:42.258 "transport_ack_timeout": 0, 00:14:42.258 "ctrlr_loss_timeout_sec": 0, 00:14:42.258 "reconnect_delay_sec": 0, 00:14:42.258 "fast_io_fail_timeout_sec": 0, 00:14:42.258 "disable_auto_failback": false, 00:14:42.258 "generate_uuids": false, 00:14:42.258 "transport_tos": 0, 00:14:42.258 "nvme_error_stat": false, 00:14:42.258 "rdma_srq_size": 0, 00:14:42.258 "io_path_stat": false, 00:14:42.258 "allow_accel_sequence": false, 00:14:42.258 "rdma_max_cq_size": 0, 00:14:42.258 "rdma_cm_event_timeout_ms": 0, 00:14:42.258 "dhchap_digests": [ 00:14:42.258 "sha256", 00:14:42.258 "sha384", 00:14:42.258 "sha512" 00:14:42.258 ], 00:14:42.258 "dhchap_dhgroups": [ 00:14:42.258 "null", 00:14:42.258 "ffdhe2048", 00:14:42.258 "ffdhe3072", 00:14:42.258 "ffdhe4096", 00:14:42.258 "ffdhe6144", 00:14:42.258 "ffdhe8192" 00:14:42.258 ] 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "bdev_nvme_set_hotplug", 00:14:42.258 "params": { 00:14:42.258 "period_us": 100000, 00:14:42.258 "enable": false 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "bdev_malloc_create", 00:14:42.258 "params": { 00:14:42.258 "name": "malloc0", 00:14:42.258 "num_blocks": 8192, 00:14:42.258 "block_size": 4096, 00:14:42.258 "physical_block_size": 4096, 00:14:42.258 "uuid": "9df2fba1-2d7e-45a7-93ff-46a4a83f24c1", 00:14:42.258 "optimal_io_boundary": 0, 00:14:42.258 "md_size": 0, 00:14:42.258 "dif_type": 0, 00:14:42.258 "dif_is_head_of_md": false, 00:14:42.258 "dif_pi_format": 0 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "bdev_wait_for_examine" 00:14:42.258 } 00:14:42.258 ] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "scsi", 00:14:42.258 "config": null 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "scheduler", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": "framework_set_scheduler", 00:14:42.258 "params": { 00:14:42.258 "name": "static" 00:14:42.258 } 00:14:42.258 } 00:14:42.258 ] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "vhost_scsi", 00:14:42.258 "config": [] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "vhost_blk", 00:14:42.258 "config": [] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "ublk", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": "ublk_create_target", 00:14:42.258 "params": { 00:14:42.258 "cpumask": "1" 00:14:42.258 } 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "method": "ublk_start_disk", 00:14:42.258 "params": { 00:14:42.258 "bdev_name": "malloc0", 00:14:42.258 "ublk_id": 0, 00:14:42.258 "num_queues": 1, 00:14:42.258 "queue_depth": 128 00:14:42.258 } 00:14:42.258 } 00:14:42.258 ] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "nbd", 00:14:42.258 "config": [] 00:14:42.258 }, 00:14:42.258 { 00:14:42.258 "subsystem": "nvmf", 00:14:42.258 "config": [ 00:14:42.258 { 00:14:42.258 "method": 
"nvmf_set_config", 00:14:42.258 "params": { 00:14:42.258 "discovery_filter": "match_any", 00:14:42.258 "admin_cmd_passthru": { 00:14:42.259 "identify_ctrlr": false 00:14:42.259 }, 00:14:42.259 "dhchap_digests": [ 00:14:42.259 "sha256", 00:14:42.259 "sha384", 00:14:42.259 "sha512" 00:14:42.259 ], 00:14:42.259 "dhchap_dhgroups": [ 00:14:42.259 "null", 00:14:42.259 "ffdhe2048", 00:14:42.259 "ffdhe3072", 00:14:42.259 "ffdhe4096", 00:14:42.259 "ffdhe6144", 00:14:42.259 "ffdhe8192" 00:14:42.259 ] 00:14:42.259 } 00:14:42.259 }, 00:14:42.259 { 00:14:42.259 "method": "nvmf_set_max_subsystems", 00:14:42.259 "params": { 00:14:42.259 "max_subsystems": 1024 00:14:42.259 } 00:14:42.259 }, 00:14:42.259 { 00:14:42.259 "method": "nvmf_set_crdt", 00:14:42.259 "params": { 00:14:42.259 "crdt1": 0, 00:14:42.259 "crdt2": 0, 00:14:42.259 "crdt3": 0 00:14:42.259 } 00:14:42.259 } 00:14:42.259 ] 00:14:42.259 }, 00:14:42.259 { 00:14:42.259 "subsystem": "iscsi", 00:14:42.259 "config": [ 00:14:42.259 { 00:14:42.259 "method": "iscsi_set_options", 00:14:42.259 "params": { 00:14:42.259 "node_base": "iqn.2016-06.io.spdk", 00:14:42.259 "max_sessions": 128, 00:14:42.259 "max_connections_per_session": 2, 00:14:42.259 "max_queue_depth": 64, 00:14:42.259 "default_time2wait": 2, 00:14:42.259 "default_time2retain": 20, 00:14:42.259 "first_burst_length": 8192, 00:14:42.259 "immediate_data": true, 00:14:42.259 "allow_duplicated_isid": false, 00:14:42.259 "error_recovery_level": 0, 00:14:42.259 "nop_timeout": 60, 00:14:42.259 "nop_in_interval": 30, 00:14:42.259 "disable_chap": false, 00:14:42.259 "require_chap": false, 00:14:42.259 "mutual_chap": false, 00:14:42.259 "chap_group": 0, 00:14:42.259 "max_large_datain_per_connection": 64, 00:14:42.259 "max_r2t_per_connection": 4, 00:14:42.259 "pdu_pool_size": 36864, 00:14:42.259 "immediate_data_pool_size": 16384, 00:14:42.259 "data_out_pool_size": 2048 00:14:42.259 } 00:14:42.259 } 00:14:42.259 ] 00:14:42.259 } 00:14:42.259 ] 00:14:42.259 }' 00:14:42.259 21:56:27 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 83811 00:14:42.259 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83811 ']' 00:14:42.259 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83811 00:14:42.259 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:42.259 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.259 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83811 00:14:42.517 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.517 killing process with pid 83811 00:14:42.517 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.517 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83811' 00:14:42.517 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83811 00:14:42.517 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83811 00:14:42.517 [2024-09-30 21:56:27.324005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:42.775 [2024-09-30 21:56:27.362235] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:42.775 [2024-09-30 21:56:27.362358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 
00:14:42.775 [2024-09-30 21:56:27.370219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:42.775 [2024-09-30 21:56:27.370270] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:42.775 [2024-09-30 21:56:27.370280] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:42.775 [2024-09-30 21:56:27.370309] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:42.775 [2024-09-30 21:56:27.370452] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=83849 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 83849 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 83849 ']' 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:43.033 21:56:27 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:43.033 "subsystems": [ 00:14:43.033 { 00:14:43.033 "subsystem": "fsdev", 00:14:43.033 "config": [ 00:14:43.033 { 00:14:43.033 "method": "fsdev_set_opts", 00:14:43.033 "params": { 00:14:43.033 "fsdev_io_pool_size": 65535, 00:14:43.033 "fsdev_io_cache_size": 256 00:14:43.033 } 00:14:43.033 } 00:14:43.033 ] 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "subsystem": "keyring", 00:14:43.033 "config": [] 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "subsystem": "iobuf", 00:14:43.033 "config": [ 00:14:43.033 { 00:14:43.033 "method": "iobuf_set_options", 00:14:43.033 "params": { 00:14:43.033 "small_pool_count": 8192, 00:14:43.033 "large_pool_count": 1024, 00:14:43.033 "small_bufsize": 8192, 00:14:43.033 "large_bufsize": 135168 00:14:43.033 } 00:14:43.033 } 00:14:43.033 ] 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "subsystem": "sock", 00:14:43.033 "config": [ 00:14:43.033 { 00:14:43.033 "method": "sock_set_default_impl", 00:14:43.033 "params": { 00:14:43.033 "impl_name": "posix" 00:14:43.033 } 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "method": "sock_impl_set_options", 00:14:43.033 "params": { 00:14:43.033 "impl_name": "ssl", 00:14:43.033 "recv_buf_size": 4096, 00:14:43.033 "send_buf_size": 4096, 00:14:43.033 "enable_recv_pipe": true, 00:14:43.033 "enable_quickack": false, 00:14:43.033 "enable_placement_id": 0, 00:14:43.033 "enable_zerocopy_send_server": true, 00:14:43.033 "enable_zerocopy_send_client": false, 00:14:43.033 "zerocopy_threshold": 0, 00:14:43.033 "tls_version": 0, 00:14:43.033 "enable_ktls": false 00:14:43.033 } 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "method": "sock_impl_set_options", 00:14:43.033 "params": { 00:14:43.033 "impl_name": "posix", 00:14:43.033 "recv_buf_size": 2097152, 00:14:43.033 "send_buf_size": 2097152, 00:14:43.033 "enable_recv_pipe": true, 00:14:43.033 
"enable_quickack": false, 00:14:43.033 "enable_placement_id": 0, 00:14:43.033 "enable_zerocopy_send_server": true, 00:14:43.033 "enable_zerocopy_send_client": false, 00:14:43.033 "zerocopy_threshold": 0, 00:14:43.033 "tls_version": 0, 00:14:43.033 "enable_ktls": false 00:14:43.033 } 00:14:43.033 } 00:14:43.033 ] 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "subsystem": "vmd", 00:14:43.033 "config": [] 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "subsystem": "accel", 00:14:43.033 "config": [ 00:14:43.033 { 00:14:43.033 "method": "accel_set_options", 00:14:43.033 "params": { 00:14:43.033 "small_cache_size": 128, 00:14:43.033 "large_cache_size": 16, 00:14:43.033 "task_count": 2048, 00:14:43.033 "sequence_count": 2048, 00:14:43.033 "buf_count": 2048 00:14:43.033 } 00:14:43.033 } 00:14:43.033 ] 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "subsystem": "bdev", 00:14:43.033 "config": [ 00:14:43.033 { 00:14:43.033 "method": "bdev_set_options", 00:14:43.033 "params": { 00:14:43.033 "bdev_io_pool_size": 65535, 00:14:43.033 "bdev_io_cache_size": 256, 00:14:43.033 "bdev_auto_examine": true, 00:14:43.033 "iobuf_small_cache_size": 128, 00:14:43.033 "iobuf_large_cache_size": 16 00:14:43.033 } 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "method": "bdev_raid_set_options", 00:14:43.033 "params": { 00:14:43.033 "process_window_size_kb": 1024, 00:14:43.033 "process_max_bandwidth_mb_sec": 0 00:14:43.033 } 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "method": "bdev_iscsi_set_options", 00:14:43.033 "params": { 00:14:43.033 "timeout_sec": 30 00:14:43.033 } 00:14:43.033 }, 00:14:43.033 { 00:14:43.033 "method": "bdev_nvme_set_options", 00:14:43.033 "params": { 00:14:43.033 "action_on_timeout": "none", 00:14:43.033 "timeout_us": 0, 00:14:43.033 "timeout_admin_us": 0, 00:14:43.033 "keep_alive_timeout_ms": 10000, 00:14:43.033 "arbitration_burst": 0, 00:14:43.033 "low_priority_weight": 0, 00:14:43.034 "medium_priority_weight": 0, 00:14:43.034 "high_priority_weight": 0, 00:14:43.034 "nvme_adminq_poll_period_us": 10000, 00:14:43.034 "nvme_ioq_poll_period_us": 0, 00:14:43.034 "io_queue_requests": 0, 00:14:43.034 "delay_cmd_submit": true, 00:14:43.034 "transport_retry_count": 4, 00:14:43.034 "bdev_retry_count": 3, 00:14:43.034 "transport_ack_timeout": 0, 00:14:43.034 "ctrlr_loss_timeout_sec": 0, 00:14:43.034 "reconnect_delay_sec": 0, 00:14:43.034 "fast_io_fail_timeout_sec": 0, 00:14:43.034 "disable_auto_failback": false, 00:14:43.034 "generate_uuids": false, 00:14:43.034 "transport_tos": 0, 00:14:43.034 "nvme_error_stat": false, 00:14:43.034 "rdma_srq_size": 0, 00:14:43.034 "io_path_stat": false, 00:14:43.034 "allow_accel_sequence": false, 00:14:43.034 "rdma_max_cq_size": 0, 00:14:43.034 "rdma_cm_event_timeout_ms": 0, 00:14:43.034 "dhchap_digests": [ 00:14:43.034 "sha256", 00:14:43.034 "sha384", 00:14:43.034 "sha512" 00:14:43.034 ], 00:14:43.034 "dhchap_dhgroups": [ 00:14:43.034 "null", 00:14:43.034 "ffdhe2048", 00:14:43.034 "ffdhe3072", 00:14:43.034 "ffdhe4096", 00:14:43.034 "ffdhe6144", 00:14:43.034 "ffdhe8192" 00:14:43.034 ] 00:14:43.034 } 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "method": "bdev_nvme_set_hotplug", 00:14:43.034 "params": { 00:14:43.034 "period_us": 100000, 00:14:43.034 "enable": false 00:14:43.034 } 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "method": "bdev_malloc_create", 00:14:43.034 "params": { 00:14:43.034 "name": "malloc0", 00:14:43.034 "num_blocks": 8192, 00:14:43.034 "block_size": 4096, 00:14:43.034 "physical_block_size": 4096, 00:14:43.034 "uuid": "9df2fba1-2d7e-45a7-93ff-46a4a83f24c1", 
00:14:43.034 "optimal_io_boundary": 0, 00:14:43.034 "md_size": 0, 00:14:43.034 "dif_type": 0, 00:14:43.034 "dif_is_head_of_md": false, 00:14:43.034 "dif_pi_format": 0 00:14:43.034 } 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "method": "bdev_wait_for_examine" 00:14:43.034 } 00:14:43.034 ] 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "scsi", 00:14:43.034 "config": null 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "scheduler", 00:14:43.034 "config": [ 00:14:43.034 { 00:14:43.034 "method": "framework_set_scheduler", 00:14:43.034 "params": { 00:14:43.034 "name": "static" 00:14:43.034 } 00:14:43.034 } 00:14:43.034 ] 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "vhost_scsi", 00:14:43.034 "config": [] 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "vhost_blk", 00:14:43.034 "config": [] 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "ublk", 00:14:43.034 "config": [ 00:14:43.034 { 00:14:43.034 "method": "ublk_create_target", 00:14:43.034 "params": { 00:14:43.034 "cpumask": "1" 00:14:43.034 } 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "method": "ublk_start_disk", 00:14:43.034 "params": { 00:14:43.034 "bdev_name": "malloc0", 00:14:43.034 "ublk_id": 0, 00:14:43.034 "num_queues": 1, 00:14:43.034 "queue_depth": 128 00:14:43.034 } 00:14:43.034 } 00:14:43.034 ] 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "nbd", 00:14:43.034 "config": [] 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "nvmf", 00:14:43.034 "config": [ 00:14:43.034 { 00:14:43.034 "method": "nvmf_set_config", 00:14:43.034 "params": { 00:14:43.034 "discovery_filter": "match_any", 00:14:43.034 "admin_cmd_passthru": { 00:14:43.034 "identify_ctrlr": false 00:14:43.034 }, 00:14:43.034 "dhchap_digests": [ 00:14:43.034 "sha256", 00:14:43.034 "sha384", 00:14:43.034 "sha512" 00:14:43.034 ], 00:14:43.034 "dhchap_dhgroups": [ 00:14:43.034 "null", 00:14:43.034 "ffdhe2048", 00:14:43.034 "ffdhe3072", 00:14:43.034 "ffdhe4096", 00:14:43.034 "ffdhe6144", 00:14:43.034 "ffdhe8192" 00:14:43.034 ] 00:14:43.034 } 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "method": "nvmf_set_max_subsystems", 00:14:43.034 "params": { 00:14:43.034 "max_subsystems": 1024 00:14:43.034 } 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "method": "nvmf_set_crdt", 00:14:43.034 "params": { 00:14:43.034 "crdt1": 0, 00:14:43.034 "crdt2": 0, 00:14:43.034 "crdt3": 0 00:14:43.034 } 00:14:43.034 } 00:14:43.034 ] 00:14:43.034 }, 00:14:43.034 { 00:14:43.034 "subsystem": "iscsi", 00:14:43.034 "config": [ 00:14:43.034 { 00:14:43.034 "method": "iscsi_set_options", 00:14:43.034 "params": { 00:14:43.034 "node_base": "iqn.2016-06.io.spdk", 00:14:43.034 "max_sessions": 128, 00:14:43.034 "max_connections_per_session": 2, 00:14:43.034 "max_queue_depth": 64, 00:14:43.034 "default_time2wait": 2, 00:14:43.034 "default_time2retain": 20, 00:14:43.034 "first_burst_length": 8192, 00:14:43.034 "immediate_data": true, 00:14:43.034 "allow_duplicated_isid": false, 00:14:43.034 "error_recovery_level": 0, 00:14:43.034 "nop_timeout": 60, 00:14:43.034 "nop_in_interval": 30, 00:14:43.034 "disable_chap": false, 00:14:43.034 "require_chap": false, 00:14:43.034 "mutual_chap": false, 00:14:43.034 "chap_group": 0, 00:14:43.034 "max_large_datain_per_connection": 64, 00:14:43.034 "max_r2t_per_connection": 4, 00:14:43.034 "pdu_pool_size": 36864, 00:14:43.034 "immediate_data_pool_size": 16384, 00:14:43.034 "data_out_pool_size": 2048 00:14:43.034 } 00:14:43.034 } 00:14:43.034 ] 00:14:43.034 } 00:14:43.034 ] 00:14:43.034 }' 00:14:43.292 [2024-09-30 
21:56:27.859595] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:43.292 [2024-09-30 21:56:27.859703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83849 ] 00:14:43.292 [2024-09-30 21:56:27.983555] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:43.292 [2024-09-30 21:56:28.002262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.292 [2024-09-30 21:56:28.055744] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.857 [2024-09-30 21:56:28.400207] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:43.857 [2024-09-30 21:56:28.400517] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:43.857 [2024-09-30 21:56:28.408327] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:43.857 [2024-09-30 21:56:28.408408] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:43.857 [2024-09-30 21:56:28.408416] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:43.857 [2024-09-30 21:56:28.408431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:43.857 [2024-09-30 21:56:28.417288] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:43.857 [2024-09-30 21:56:28.417310] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:43.857 [2024-09-30 21:56:28.424218] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:43.857 [2024-09-30 21:56:28.424308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:43.857 [2024-09-30 21:56:28.441212] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 83849 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 83849 ']' 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 83849 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
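At this point the test has confirmed that the device was rebuilt purely from the saved JSON: ublk_get_disks reports /dev/ublkb0 again and the node exists as a block device. The equivalent check by hand (a sketch, assuming jq is available):

  blkpath=$(./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
  [ "$blkpath" = /dev/ublkb0 ] && [ -b "$blkpath" ] && echo "restored from config"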
00:14:43.857 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83849 00:14:44.115 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:44.115 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:44.115 killing process with pid 83849 00:14:44.115 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83849' 00:14:44.115 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 83849 00:14:44.115 21:56:28 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 83849 00:14:44.115 [2024-09-30 21:56:28.922680] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:44.376 [2024-09-30 21:56:28.959300] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:44.376 [2024-09-30 21:56:28.959430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:44.376 [2024-09-30 21:56:28.967231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:44.376 [2024-09-30 21:56:28.967285] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:44.376 [2024-09-30 21:56:28.967298] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:44.376 [2024-09-30 21:56:28.967328] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:44.376 [2024-09-30 21:56:28.967466] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:44.635 21:56:29 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:44.635 00:14:44.635 real 0m3.609s 00:14:44.635 user 0m2.434s 00:14:44.635 sys 0m1.714s 00:14:44.635 21:56:29 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:44.635 21:56:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:44.635 ************************************ 00:14:44.635 END TEST test_save_ublk_config 00:14:44.635 ************************************ 00:14:44.635 21:56:29 ublk -- ublk/ublk.sh@139 -- # spdk_pid=83894 00:14:44.635 21:56:29 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:44.635 21:56:29 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.635 21:56:29 ublk -- ublk/ublk.sh@141 -- # waitforlisten 83894 00:14:44.635 21:56:29 ublk -- common/autotest_common.sh@831 -- # '[' -z 83894 ']' 00:14:44.635 21:56:29 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.635 21:56:29 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.635 21:56:29 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.635 21:56:29 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.635 21:56:29 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:44.893 [2024-09-30 21:56:29.509270] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
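For the remaining test cases a fresh target is started with -m 0x3, a CPU mask selecting cores 0 and 1, which is why the startup lines below report two available cores and one reactor per core:

  "$SPDK_DIR/build/bin/spdk_tgt" -m 0x3 -L ublk &   # 0x3 = cores 0 and 1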
00:14:44.893 [2024-09-30 21:56:29.509417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83894 ] 00:14:44.893 [2024-09-30 21:56:29.646134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:44.893 [2024-09-30 21:56:29.666669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:45.149 [2024-09-30 21:56:29.710055] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.149 [2024-09-30 21:56:29.710151] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.713 21:56:30 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.713 21:56:30 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:45.713 21:56:30 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:45.713 21:56:30 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:45.713 21:56:30 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.713 21:56:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.713 ************************************ 00:14:45.713 START TEST test_create_ublk 00:14:45.713 ************************************ 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.713 [2024-09-30 21:56:30.357212] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:45.713 [2024-09-30 21:56:30.358560] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.713 [2024-09-30 21:56:30.429363] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:45.713 [2024-09-30 21:56:30.429763] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:45.713 [2024-09-30 21:56:30.429779] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:45.713 [2024-09-30 21:56:30.429787] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:45.713 [2024-09-30 21:56:30.441214] ublk.c: 349:ublk_ctrl_process_cqe: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:45.713 [2024-09-30 21:56:30.441236] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:45.713 [2024-09-30 21:56:30.452225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:45.713 [2024-09-30 21:56:30.452879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:45.713 [2024-09-30 21:56:30.471220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.713 21:56:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:45.713 { 00:14:45.713 "ublk_device": "/dev/ublkb0", 00:14:45.713 "id": 0, 00:14:45.713 "queue_depth": 512, 00:14:45.713 "num_queues": 4, 00:14:45.713 "bdev_name": "Malloc0" 00:14:45.713 } 00:14:45.713 ]' 00:14:45.713 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:45.971 21:56:30 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio 
--name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:45.971 21:56:30 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:46.228 fio: verification read phase will never start because write phase uses all of runtime 00:14:46.228 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:46.228 fio-3.35 00:14:46.228 Starting 1 process 00:14:56.267 00:14:56.267 fio_test: (groupid=0, jobs=1): err= 0: pid=83939: Mon Sep 30 21:56:40 2024 00:14:56.267 write: IOPS=6222, BW=24.3MiB/s (25.5MB/s)(243MiB/10001msec); 0 zone resets 00:14:56.267 clat (usec): min=34, max=831033, avg=159.86, stdev=6796.81 00:14:56.267 lat (usec): min=34, max=831040, avg=160.34, stdev=6796.92 00:14:56.267 clat percentiles (usec): 00:14:56.267 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:14:56.267 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:14:56.267 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 69], 95.00th=[ 75], 00:14:56.267 | 99.00th=[ 87], 99.50th=[ 111], 99.90th=[ 2040], 99.95th=[ 2999], 00:14:56.267 | 99.99th=[320865] 00:14:56.267 bw ( KiB/s): min= 8, max=66376, per=100.00%, avg=25621.65, stdev=31790.95, samples=17 00:14:56.267 iops : min= 2, max=16594, avg=6405.41, stdev=7947.74, samples=17 00:14:56.267 lat (usec) : 50=4.02%, 100=95.40%, 250=0.33%, 500=0.08%, 750=0.01% 00:14:56.267 lat (usec) : 1000=0.01% 00:14:56.267 lat (msec) : 2=0.04%, 4=0.07%, 10=0.01%, 100=0.01%, 250=0.01% 00:14:56.267 lat (msec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:14:56.267 cpu : usr=1.01%, sys=5.18%, ctx=62225, majf=0, minf=797 00:14:56.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:56.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:56.267 issued rwts: total=0,62229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:56.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:56.267 00:14:56.267 Run status group 0 (all jobs): 00:14:56.267 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=243MiB (255MB), run=10001-10001msec 00:14:56.267 00:14:56.267 Disk stats (read/write): 00:14:56.267 ublkb0: ios=0/60561, merge=0/0, ticks=0/9303, in_queue=9304, util=99.06% 00:14:56.267 21:56:40 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.267 [2024-09-30 21:56:40.921793] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:56.267 [2024-09-30 21:56:40.962247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:56.267 [2024-09-30 21:56:40.962929] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:56.267 [2024-09-30 21:56:40.972207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:56.267 [2024-09-30 21:56:40.972492] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:56.267 
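The fio job whose results appear above is copied verbatim from the trace: a ten-second, direct-I/O, pattern-verified write across the whole 128 MiB device. Because the write phase consumes the entire runtime, fio itself warns that the verification read phase never starts. The teardown beginning here stops the disk and then deliberately repeats the stop, expecting the -19 (No such device) JSON-RPC error shown further below:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0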
[2024-09-30 21:56:40.972508] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.267 21:56:40 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.267 [2024-09-30 21:56:40.980289] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:56.267 request: 00:14:56.267 { 00:14:56.267 "ublk_id": 0, 00:14:56.267 "method": "ublk_stop_disk", 00:14:56.267 "req_id": 1 00:14:56.267 } 00:14:56.267 Got JSON-RPC error response 00:14:56.267 response: 00:14:56.267 { 00:14:56.267 "code": -19, 00:14:56.267 "message": "No such device" 00:14:56.267 } 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.267 21:56:40 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.267 [2024-09-30 21:56:40.991287] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:56.267 [2024-09-30 21:56:40.996757] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:56.267 [2024-09-30 21:56:40.996789] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.267 21:56:40 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.267 21:56:40 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.267 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.267 21:56:41 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:56.267 21:56:41 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:56.267 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.267 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.526 21:56:41 ublk.test_create_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.526 21:56:41 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:56.526 21:56:41 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:56.526 21:56:41 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:56.526 21:56:41 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:56.526 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.526 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.526 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.526 21:56:41 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:56.526 21:56:41 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:56.526 21:56:41 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:56.526 00:14:56.526 real 0m10.813s 00:14:56.526 user 0m0.426s 00:14:56.526 sys 0m0.601s 00:14:56.526 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.526 21:56:41 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.526 ************************************ 00:14:56.526 END TEST test_create_ublk 00:14:56.526 ************************************ 00:14:56.526 21:56:41 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:56.526 21:56:41 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:56.526 21:56:41 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.526 21:56:41 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.526 ************************************ 00:14:56.526 START TEST test_create_multi_ublk 00:14:56.526 ************************************ 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.526 [2024-09-30 21:56:41.220199] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:56.526 [2024-09-30 21:56:41.221382] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.526 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.526 [2024-09-30 21:56:41.328331] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:56.526 [2024-09-30 21:56:41.328667] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:56.526 [2024-09-30 21:56:41.328679] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:56.526 [2024-09-30 21:56:41.328695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.786 [2024-09-30 21:56:41.340272] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.786 [2024-09-30 21:56:41.340294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.786 [2024-09-30 21:56:41.352217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.786 [2024-09-30 21:56:41.352750] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:56.786 [2024-09-30 21:56:41.366211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.786 [2024-09-30 21:56:41.462316] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:56.786 [2024-09-30 21:56:41.462636] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:56.786 [2024-09-30 21:56:41.462649] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:56.786 [2024-09-30 21:56:41.462655] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.786 [2024-09-30 21:56:41.474253] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.786 [2024-09-30 21:56:41.474269] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.786 [2024-09-30 21:56:41.486217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.786 [2024-09-30 21:56:41.486742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:56.786 [2024-09-30 21:56:41.489636] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.786 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:56.786 [2024-09-30 21:56:41.590299] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:56.786 [2024-09-30 21:56:41.590625] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:56.786 [2024-09-30 21:56:41.590637] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:56.786 [2024-09-30 21:56:41.590644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:57.045 [2024-09-30 21:56:41.602221] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:57.045 [2024-09-30 21:56:41.602240] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:57.045 [2024-09-30 21:56:41.614219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:57.045 [2024-09-30 21:56:41.614738] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:57.045 [2024-09-30 21:56:41.654211] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.045 [2024-09-30 21:56:41.762312] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:57.045 [2024-09-30 21:56:41.762627] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via 
ublk 3 00:14:57.045 [2024-09-30 21:56:41.762641] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:57.045 [2024-09-30 21:56:41.762646] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:57.045 [2024-09-30 21:56:41.774223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:57.045 [2024-09-30 21:56:41.774239] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:57.045 [2024-09-30 21:56:41.786214] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:57.045 [2024-09-30 21:56:41.786727] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:57.045 [2024-09-30 21:56:41.789682] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:57.045 { 00:14:57.045 "ublk_device": "/dev/ublkb0", 00:14:57.045 "id": 0, 00:14:57.045 "queue_depth": 512, 00:14:57.045 "num_queues": 4, 00:14:57.045 "bdev_name": "Malloc0" 00:14:57.045 }, 00:14:57.045 { 00:14:57.045 "ublk_device": "/dev/ublkb1", 00:14:57.045 "id": 1, 00:14:57.045 "queue_depth": 512, 00:14:57.045 "num_queues": 4, 00:14:57.045 "bdev_name": "Malloc1" 00:14:57.045 }, 00:14:57.045 { 00:14:57.045 "ublk_device": "/dev/ublkb2", 00:14:57.045 "id": 2, 00:14:57.045 "queue_depth": 512, 00:14:57.045 "num_queues": 4, 00:14:57.045 "bdev_name": "Malloc2" 00:14:57.045 }, 00:14:57.045 { 00:14:57.045 "ublk_device": "/dev/ublkb3", 00:14:57.045 "id": 3, 00:14:57.045 "queue_depth": 512, 00:14:57.045 "num_queues": 4, 00:14:57.045 "bdev_name": "Malloc3" 00:14:57.045 } 00:14:57.045 ]' 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.045 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 
-- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.304 21:56:41 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:57.304 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:57.562 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:57.821 21:56:42 
ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.821 [2024-09-30 21:56:42.461292] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.821 [2024-09-30 21:56:42.493820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.821 [2024-09-30 21:56:42.494875] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:57.821 [2024-09-30 21:56:42.497931] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:57.821 [2024-09-30 21:56:42.498173] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:57.821 [2024-09-30 21:56:42.498196] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.821 [2024-09-30 21:56:42.515276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.821 [2024-09-30 21:56:42.550207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.821 [2024-09-30 21:56:42.551024] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:57.821 [2024-09-30 21:56:42.563225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:57.821 [2024-09-30 21:56:42.563472] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:57.821 [2024-09-30 21:56:42.563488] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.821 [2024-09-30 21:56:42.567397] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.821 [2024-09-30 21:56:42.604238] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.821 [2024-09-30 21:56:42.604977] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:57.821 [2024-09-30 21:56:42.606494] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:57.821 [2024-09-30 21:56:42.606727] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:57.821 [2024-09-30 21:56:42.606739] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:57.821 21:56:42 
ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.821 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:57.821 [2024-09-30 21:56:42.617283] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:58.080 [2024-09-30 21:56:42.649244] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:58.080 [2024-09-30 21:56:42.649896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:58.080 [2024-09-30 21:56:42.657220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:58.080 [2024-09-30 21:56:42.657451] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:58.080 [2024-09-30 21:56:42.657462] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:58.080 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.080 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:58.080 [2024-09-30 21:56:42.817258] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:58.080 [2024-09-30 21:56:42.818546] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:58.080 [2024-09-30 21:56:42.818570] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:58.080 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:58.080 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.080 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:58.080 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.080 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.339 21:56:42 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
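The teardown order in the trace above is deliberate: every ublk disk is stopped before the target is destroyed, and the backing malloc bdevs are deleted last. A minimal sketch of that sequence, reusing only the rpc.py calls the xtrace shows (rpc_cmd is the harness wrapper around scripts/rpc.py; MAX_DEV_ID=3 is taken from the loops above, everything else is illustrative):

    #!/usr/bin/env bash
    # Sketch of the multi-ublk teardown traced above, assuming the target
    # is running and ublk disks 0..3 exist. The rpc.py subcommands
    # (ublk_stop_disk, ublk_destroy_target, bdev_malloc_delete) are the
    # ones visible in the xtrace.
    set -e
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    MAX_DEV_ID=3

    for i in $(seq 0 "$MAX_DEV_ID"); do
        "$RPC" ublk_stop_disk "$i"            # drives UBLK_CMD_STOP_DEV + UBLK_CMD_DEL_DEV
    done
    "$RPC" -t 120 ublk_destroy_target         # potentially slow, hence the 120 s timeout in the trace
    for i in $(seq 0 "$MAX_DEV_ID"); do
        "$RPC" bdev_malloc_delete "Malloc$i"  # backing bdevs go last
    done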
00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.339 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:58.598 ************************************ 00:14:58.598 END TEST test_create_multi_ublk 00:14:58.598 ************************************ 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:58.598 00:14:58.598 real 0m2.044s 00:14:58.598 user 0m0.772s 00:14:58.598 sys 0m0.147s 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:58.598 21:56:43 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:58.598 21:56:43 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:58.598 21:56:43 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:58.598 21:56:43 ublk -- ublk/ublk.sh@130 -- # killprocess 83894 00:14:58.598 21:56:43 ublk -- common/autotest_common.sh@950 -- # '[' -z 83894 ']' 00:14:58.598 21:56:43 ublk -- common/autotest_common.sh@954 -- # kill -0 83894 00:14:58.598 21:56:43 ublk -- common/autotest_common.sh@955 -- # uname 00:14:58.598 21:56:43 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:58.598 21:56:43 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83894 00:14:58.598 killing process with pid 83894 00:14:58.599 21:56:43 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:58.599 21:56:43 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:58.599 21:56:43 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83894' 00:14:58.599 21:56:43 ublk -- common/autotest_common.sh@969 -- # kill 83894 00:14:58.599 21:56:43 ublk -- common/autotest_common.sh@974 -- # wait 83894 00:14:58.858 [2024-09-30 21:56:43.528641] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:58.858 [2024-09-30 21:56:43.528701] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:59.116 00:14:59.116 real 0m18.290s 00:14:59.116 user 0m22.731s 00:14:59.116 sys 0m5.024s 00:14:59.116 21:56:43 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:59.116 ************************************ 00:14:59.116 END TEST ublk 00:14:59.116 ************************************ 
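Before that teardown, the test validated every device that ublk_get_disks reported, one jq field at a time (ublk.sh@72-78 above). A sketch of that check, assuming the JSON layout shown in the ublk_get_disks output (fields ublk_device, id, queue_depth, num_queues, bdev_name); the [[ ... ]] pattern matches in the trace are the harness's way of asserting equality:

    # Per-device verification loop, assuming the ublk_get_disks JSON above.
    # set -e makes the first mismatched field abort the test.
    set -e
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ublk_dev=$("$RPC" ublk_get_disks)

    for i in $(seq 0 3); do
        [[ $(jq -r ".[$i].ublk_device" <<< "$ublk_dev") == "/dev/ublkb$i" ]]
        [[ $(jq -r ".[$i].id"          <<< "$ublk_dev") == "$i" ]]
        [[ $(jq -r ".[$i].queue_depth" <<< "$ublk_dev") == 512 ]]
        [[ $(jq -r ".[$i].num_queues"  <<< "$ublk_dev") == 4 ]]
        [[ $(jq -r ".[$i].bdev_name"   <<< "$ublk_dev") == "Malloc$i" ]]
    done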
00:14:59.116 21:56:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.376 21:56:43 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:59.376 21:56:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:59.376 21:56:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:59.376 21:56:43 -- common/autotest_common.sh@10 -- # set +x 00:14:59.376 ************************************ 00:14:59.376 START TEST ublk_recovery 00:14:59.376 ************************************ 00:14:59.376 21:56:43 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:59.376 * Looking for test storage... 00:14:59.376 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:59.376 21:56:44 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.376 --rc genhtml_branch_coverage=1 00:14:59.376 --rc genhtml_function_coverage=1 00:14:59.376 --rc genhtml_legend=1 00:14:59.376 --rc geninfo_all_blocks=1 00:14:59.376 --rc geninfo_unexecuted_blocks=1 00:14:59.376 00:14:59.376 ' 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.376 --rc genhtml_branch_coverage=1 00:14:59.376 --rc genhtml_function_coverage=1 00:14:59.376 --rc genhtml_legend=1 00:14:59.376 --rc geninfo_all_blocks=1 00:14:59.376 --rc geninfo_unexecuted_blocks=1 00:14:59.376 00:14:59.376 ' 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.376 --rc genhtml_branch_coverage=1 00:14:59.376 --rc genhtml_function_coverage=1 00:14:59.376 --rc genhtml_legend=1 00:14:59.376 --rc geninfo_all_blocks=1 00:14:59.376 --rc geninfo_unexecuted_blocks=1 00:14:59.376 00:14:59.376 ' 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:59.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:59.376 --rc genhtml_branch_coverage=1 00:14:59.376 --rc genhtml_function_coverage=1 00:14:59.376 --rc genhtml_legend=1 00:14:59.376 --rc geninfo_all_blocks=1 00:14:59.376 --rc geninfo_unexecuted_blocks=1 00:14:59.376 00:14:59.376 ' 00:14:59.376 21:56:44 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:59.376 21:56:44 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:59.376 21:56:44 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:59.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.376 21:56:44 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=84268 00:14:59.376 21:56:44 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:59.376 21:56:44 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 84268 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84268 ']' 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.376 21:56:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:59.376 21:56:44 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:59.376 [2024-09-30 21:56:44.171977] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:14:59.376 [2024-09-30 21:56:44.172105] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84268 ] 00:14:59.635 [2024-09-30 21:56:44.302383] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
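The recovery suite needs the kernel driver plus an SPDK target with the ublk subsystem enabled before any RPC can land. A sketch of the bring-up the trace performs; the spdk_tgt flags (-m 0x3 -L ublk), modprobe ublk_drv, and the three RPCs are verbatim from the log, while the polling loop is an illustrative stand-in for the harness's waitforlisten helper from autotest_common.sh:

    # Sketch of the ublk_recovery bring-up traced above.
    set -e
    modprobe ublk_drv
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!
    # Stand-in for waitforlisten(): poll until the RPC socket answers.
    # rpc_get_methods is a core SPDK RPC, so it works as a liveness probe.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods &>/dev/null; do
        sleep 0.5
    done
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128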
00:14:59.635 [2024-09-30 21:56:44.319850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:59.635 [2024-09-30 21:56:44.361581] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.635 [2024-09-30 21:56:44.361664] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.202 21:56:45 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.202 21:56:45 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:00.202 21:56:45 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:00.202 21:56:45 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.202 21:56:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.460 [2024-09-30 21:56:45.020209] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:00.460 [2024-09-30 21:56:45.021454] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:00.460 21:56:45 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.460 21:56:45 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:00.460 21:56:45 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.460 21:56:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.460 malloc0 00:15:00.460 21:56:45 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.460 21:56:45 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:00.460 21:56:45 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.460 21:56:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:00.460 [2024-09-30 21:56:45.060331] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:00.460 [2024-09-30 21:56:45.060417] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:00.460 [2024-09-30 21:56:45.060426] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:00.460 [2024-09-30 21:56:45.060432] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:00.460 [2024-09-30 21:56:45.069324] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:00.460 [2024-09-30 21:56:45.069343] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:00.460 [2024-09-30 21:56:45.076216] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:00.460 [2024-09-30 21:56:45.076340] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:00.460 [2024-09-30 21:56:45.091213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:00.460 1 00:15:00.460 21:56:45 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.460 21:56:45 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:01.395 21:56:46 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=84301 00:15:01.395 21:56:46 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:01.395 21:56:46 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:01.395 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.395 
fio-3.35 00:15:01.395 Starting 1 process 00:15:06.657 21:56:51 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 84268 00:15:06.657 21:56:51 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:11.961 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 84268 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:11.961 21:56:56 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=84412 00:15:11.961 21:56:56 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:11.961 21:56:56 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:11.961 21:56:56 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 84412 00:15:11.961 21:56:56 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 84412 ']' 00:15:11.961 21:56:56 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.961 21:56:56 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:11.961 21:56:56 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.961 21:56:56 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:11.961 21:56:56 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:11.961 [2024-09-30 21:56:56.187883] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:15:11.961 [2024-09-30 21:56:56.188022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84412 ] 00:15:11.961 [2024-09-30 21:56:56.318584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
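This is the crash the whole suite exists to exercise: fio keeps hammering /dev/ublkb1 while the target that backs it is killed with SIGKILL, then a fresh target is started. A condensed sketch of the window traced in ublk_recovery.sh@30-44; the fio flags and the kill -9 are verbatim from the log, the PID handling is simplified for illustration:

    # Condensed crash/restart window from the trace above.
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!
    sleep 5
    kill -9 "$spdk_pid"      # hard-kill the target mid-I/O
    sleep 5
    # Restart; the kernel keeps /dev/ublkb1 alive, so fio stalls instead
    # of failing until recovery reattaches the queues.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    spdk_pid=$!

After the restart, ublk_recover_disk malloc0 1 (visible a few lines below) re-binds the surviving /dev/ublkb1 to a freshly created malloc0, polling UBLK_CMD_GET_DEV_INFO until the device reports state 1 and then driving UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY.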
00:15:11.961 [2024-09-30 21:56:56.335173] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:11.961 [2024-09-30 21:56:56.377511] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.961 [2024-09-30 21:56:56.377564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.220 21:56:57 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.220 21:56:57 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:15:12.220 21:56:57 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:12.220 21:56:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.220 21:56:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.220 [2024-09-30 21:56:57.029209] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:12.220 [2024-09-30 21:56:57.030476] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:12.220 21:56:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.220 21:56:57 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:12.220 21:56:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.220 21:56:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.478 malloc0 00:15:12.478 21:56:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.478 21:56:57 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:12.478 21:56:57 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.478 21:56:57 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:12.478 [2024-09-30 21:56:57.069331] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:12.478 [2024-09-30 21:56:57.069364] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:12.478 [2024-09-30 21:56:57.069373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:12.478 [2024-09-30 21:56:57.077232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:12.478 [2024-09-30 21:56:57.077257] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:12.478 1 00:15:12.478 21:56:57 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.478 21:56:57 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 84301 00:15:13.413 [2024-09-30 21:56:58.077315] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:13.413 [2024-09-30 21:56:58.084219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:13.413 [2024-09-30 21:56:58.084237] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:14.348 [2024-09-30 21:56:59.084270] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:14.348 [2024-09-30 21:56:59.088222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:14.348 [2024-09-30 21:56:59.088240] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:15.283 [2024-09-30 21:57:00.088268] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:15.283 [2024-09-30 21:57:00.094207] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:15.283 [2024-09-30 
21:57:00.094221] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:15.283 [2024-09-30 21:57:00.094230] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:15.283 [2024-09-30 21:57:00.094327] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:37.206 [2024-09-30 21:57:21.540231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:37.206 [2024-09-30 21:57:21.545958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:37.206 [2024-09-30 21:57:21.552437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:37.206 [2024-09-30 21:57:21.552452] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:03.766 00:16:03.766 fio_test: (groupid=0, jobs=1): err= 0: pid=84304: Mon Sep 30 21:57:46 2024 00:16:03.766 read: IOPS=15.1k, BW=59.0MiB/s (61.9MB/s)(3542MiB/60001msec) 00:16:03.766 slat (nsec): min=1127, max=193139, avg=5084.12, stdev=1689.56 00:16:03.766 clat (usec): min=580, max=30456k, avg=4124.01, stdev=251803.59 00:16:03.766 lat (usec): min=585, max=30456k, avg=4129.10, stdev=251803.59 00:16:03.766 clat percentiles (usec): 00:16:03.766 | 1.00th=[ 1647], 5.00th=[ 1729], 10.00th=[ 1745], 20.00th=[ 1778], 00:16:03.766 | 30.00th=[ 1811], 40.00th=[ 1827], 50.00th=[ 1860], 60.00th=[ 1942], 00:16:03.766 | 70.00th=[ 2008], 80.00th=[ 2057], 90.00th=[ 2147], 95.00th=[ 3130], 00:16:03.766 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7832], 99.95th=[12387], 00:16:03.766 | 99.99th=[13435] 00:16:03.766 bw ( KiB/s): min=29280, max=135296, per=100.00%, avg=120900.88, stdev=20180.56, samples=59 00:16:03.766 iops : min= 7320, max=33824, avg=30225.22, stdev=5045.14, samples=59 00:16:03.766 write: IOPS=15.1k, BW=59.0MiB/s (61.8MB/s)(3537MiB/60001msec); 0 zone resets 00:16:03.766 slat (nsec): min=1162, max=193271, avg=5193.97, stdev=1756.23 00:16:03.766 clat (usec): min=625, max=30457k, avg=4340.26, stdev=259978.31 00:16:03.766 lat (usec): min=629, max=30457k, avg=4345.45, stdev=259978.32 00:16:03.766 clat percentiles (usec): 00:16:03.766 | 1.00th=[ 1696], 5.00th=[ 1811], 10.00th=[ 1844], 20.00th=[ 1860], 00:16:03.766 | 30.00th=[ 1893], 40.00th=[ 1909], 50.00th=[ 1942], 60.00th=[ 2040], 00:16:03.766 | 70.00th=[ 2089], 80.00th=[ 2147], 90.00th=[ 2245], 95.00th=[ 3064], 00:16:03.766 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7832], 99.95th=[12649], 00:16:03.766 | 99.99th=[13698] 00:16:03.766 bw ( KiB/s): min=30104, max=134208, per=100.00%, avg=120728.54, stdev=20095.34, samples=59 00:16:03.766 iops : min= 7526, max=33552, avg=30182.14, stdev=5023.84, samples=59 00:16:03.766 lat (usec) : 750=0.01%, 1000=0.01% 00:16:03.766 lat (msec) : 2=62.91%, 4=34.07%, 10=2.96%, 20=0.05%, >=2000=0.01% 00:16:03.766 cpu : usr=3.27%, sys=16.05%, ctx=60836, majf=0, minf=14 00:16:03.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:03.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:03.766 issued rwts: total=906787,905543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:03.766 00:16:03.766 Run status group 0 (all jobs): 00:16:03.766 READ: bw=59.0MiB/s (61.9MB/s), 59.0MiB/s-59.0MiB/s (61.9MB/s-61.9MB/s), io=3542MiB 
(3714MB), run=60001-60001msec 00:16:03.766 WRITE: bw=59.0MiB/s (61.8MB/s), 59.0MiB/s-59.0MiB/s (61.8MB/s-61.8MB/s), io=3537MiB (3709MB), run=60001-60001msec 00:16:03.766 00:16:03.766 Disk stats (read/write): 00:16:03.766 ublkb1: ios=903223/902004, merge=0/0, ticks=3684261/3801098, in_queue=7485359, util=99.90% 00:16:03.766 21:57:46 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.766 [2024-09-30 21:57:46.355372] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:03.766 [2024-09-30 21:57:46.395327] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:03.766 [2024-09-30 21:57:46.395547] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:03.766 [2024-09-30 21:57:46.403210] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:03.766 [2024-09-30 21:57:46.403372] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:03.766 [2024-09-30 21:57:46.403394] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.766 21:57:46 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.766 [2024-09-30 21:57:46.419275] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:03.766 [2024-09-30 21:57:46.420653] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:03.766 [2024-09-30 21:57:46.420685] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.766 21:57:46 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:03.766 21:57:46 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:03.766 21:57:46 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 84412 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 84412 ']' 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 84412 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84412 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:03.766 killing process with pid 84412 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84412' 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@969 -- # kill 84412 00:16:03.766 21:57:46 ublk_recovery -- common/autotest_common.sh@974 -- # wait 84412 00:16:03.766 [2024-09-30 21:57:46.690646] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:03.766 [2024-09-30 21:57:46.690709] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:03.766 ************************************ 00:16:03.766 END TEST ublk_recovery 00:16:03.766 
************************************ 00:16:03.766 00:16:03.766 real 1m3.125s 00:16:03.766 user 1m44.440s 00:16:03.766 sys 0m22.855s 00:16:03.766 21:57:47 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.766 21:57:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:03.766 21:57:47 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:03.766 21:57:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:03.766 21:57:47 -- common/autotest_common.sh@10 -- # set +x 00:16:03.766 21:57:47 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:03.766 21:57:47 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:03.766 21:57:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:03.766 21:57:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.766 21:57:47 -- common/autotest_common.sh@10 -- # set +x 00:16:03.766 ************************************ 00:16:03.766 START TEST ftl 00:16:03.767 ************************************ 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:03.767 * Looking for test storage... 00:16:03.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.767 21:57:47 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.767 21:57:47 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.767 21:57:47 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.767 21:57:47 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.767 21:57:47 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.767 21:57:47 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.767 21:57:47 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.767 21:57:47 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:03.767 21:57:47 ftl -- scripts/common.sh@345 -- # : 1 00:16:03.767 21:57:47 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.767 21:57:47 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.767 21:57:47 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:03.767 21:57:47 ftl -- scripts/common.sh@353 -- # local d=1 00:16:03.767 21:57:47 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.767 21:57:47 ftl -- scripts/common.sh@355 -- # echo 1 00:16:03.767 21:57:47 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.767 21:57:47 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@353 -- # local d=2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.767 21:57:47 ftl -- scripts/common.sh@355 -- # echo 2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.767 21:57:47 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.767 21:57:47 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.767 21:57:47 ftl -- scripts/common.sh@368 -- # return 0 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:03.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.767 --rc genhtml_branch_coverage=1 00:16:03.767 --rc genhtml_function_coverage=1 00:16:03.767 --rc genhtml_legend=1 00:16:03.767 --rc geninfo_all_blocks=1 00:16:03.767 --rc geninfo_unexecuted_blocks=1 00:16:03.767 00:16:03.767 ' 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:03.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.767 --rc genhtml_branch_coverage=1 00:16:03.767 --rc genhtml_function_coverage=1 00:16:03.767 --rc genhtml_legend=1 00:16:03.767 --rc geninfo_all_blocks=1 00:16:03.767 --rc geninfo_unexecuted_blocks=1 00:16:03.767 00:16:03.767 ' 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:03.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.767 --rc genhtml_branch_coverage=1 00:16:03.767 --rc genhtml_function_coverage=1 00:16:03.767 --rc genhtml_legend=1 00:16:03.767 --rc geninfo_all_blocks=1 00:16:03.767 --rc geninfo_unexecuted_blocks=1 00:16:03.767 00:16:03.767 ' 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:03.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.767 --rc genhtml_branch_coverage=1 00:16:03.767 --rc genhtml_function_coverage=1 00:16:03.767 --rc genhtml_legend=1 00:16:03.767 --rc geninfo_all_blocks=1 00:16:03.767 --rc geninfo_unexecuted_blocks=1 00:16:03.767 00:16:03.767 ' 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:03.767 21:57:47 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:03.767 21:57:47 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:03.767 21:57:47 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:03.767 21:57:47 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:03.767 21:57:47 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:03.767 21:57:47 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:03.767 21:57:47 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:03.767 21:57:47 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:03.767 21:57:47 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.767 21:57:47 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.767 21:57:47 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:03.767 21:57:47 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:03.767 21:57:47 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:03.767 21:57:47 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:03.767 21:57:47 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:03.767 21:57:47 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:03.767 21:57:47 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.767 21:57:47 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:03.767 21:57:47 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:03.767 21:57:47 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:03.767 21:57:47 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:03.767 21:57:47 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:03.767 21:57:47 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:03.767 21:57:47 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:03.767 21:57:47 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:03.767 21:57:47 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:03.767 21:57:47 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:03.767 21:57:47 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:03.767 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:03.767 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:03.767 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:03.767 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:03.767 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=85200 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:03.767 21:57:47 ftl -- ftl/ftl.sh@38 -- # waitforlisten 85200 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@831 -- # '[' -z 85200 ']' 00:16:03.767 21:57:47 ftl -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.767 21:57:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:03.767 [2024-09-30 21:57:47.818391] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:03.767 [2024-09-30 21:57:47.818720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85200 ] 00:16:03.767 [2024-09-30 21:57:47.947615] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:03.767 [2024-09-30 21:57:47.964373] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.767 [2024-09-30 21:57:48.007171] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.026 21:57:48 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.026 21:57:48 ftl -- common/autotest_common.sh@864 -- # return 0 00:16:04.026 21:57:48 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:04.289 21:57:48 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:04.549 21:57:49 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:04.549 21:57:49 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@50 -- # break 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:05.114 21:57:49 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:05.373 21:57:50 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:05.373 21:57:50 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:05.373 21:57:50 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:05.373 21:57:50 ftl -- ftl/ftl.sh@63 -- # break 00:16:05.373 21:57:50 ftl -- ftl/ftl.sh@66 -- # killprocess 85200 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@950 -- # '[' -z 85200 ']' 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@954 -- # kill -0 85200 
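Worth pausing on how ftl.sh picked its devices above: it asks bdev_get_bdevs for everything and filters with jq, requiring 64-byte metadata and at least 1310720 blocks for the NV cache disk, then excluding that disk's PCI address when choosing the base device. The two filters, as traced in ftl.sh@47 and ftl.sh@60 (jq expressions verbatim; the hardcoded 0000:00:10.0 is the cache address this run selected):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Cache candidates: non-zoned NVMe bdevs with 64B metadata, >= 1310720 blocks.
    cache_disks=$("$RPC" bdev_get_bdevs \
        | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')
    # Base candidates: same size floor, any PCI address other than the chosen cache.
    base_disks=$("$RPC" bdev_get_bdevs \
        | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address')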
00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@955 -- # uname 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85200 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:05.373 killing process with pid 85200 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85200' 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@969 -- # kill 85200 00:16:05.373 21:57:50 ftl -- common/autotest_common.sh@974 -- # wait 85200 00:16:05.632 21:57:50 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:05.632 21:57:50 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:05.632 21:57:50 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:05.632 21:57:50 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:05.632 21:57:50 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:05.632 ************************************ 00:16:05.632 START TEST ftl_fio_basic 00:16:05.632 ************************************ 00:16:05.632 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:05.892 * Looking for test storage... 00:16:05.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:05.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.892 --rc genhtml_branch_coverage=1 00:16:05.892 --rc genhtml_function_coverage=1 00:16:05.892 --rc genhtml_legend=1 00:16:05.892 --rc geninfo_all_blocks=1 00:16:05.892 --rc geninfo_unexecuted_blocks=1 00:16:05.892 00:16:05.892 ' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:05.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.892 --rc genhtml_branch_coverage=1 00:16:05.892 --rc genhtml_function_coverage=1 00:16:05.892 --rc genhtml_legend=1 00:16:05.892 --rc geninfo_all_blocks=1 00:16:05.892 --rc geninfo_unexecuted_blocks=1 00:16:05.892 00:16:05.892 ' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:05.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.892 --rc genhtml_branch_coverage=1 00:16:05.892 --rc genhtml_function_coverage=1 00:16:05.892 --rc genhtml_legend=1 00:16:05.892 --rc geninfo_all_blocks=1 00:16:05.892 --rc geninfo_unexecuted_blocks=1 00:16:05.892 00:16:05.892 ' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:05.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:05.892 --rc genhtml_branch_coverage=1 00:16:05.892 --rc genhtml_function_coverage=1 00:16:05.892 --rc genhtml_legend=1 00:16:05.892 --rc geninfo_all_blocks=1 00:16:05.892 --rc geninfo_unexecuted_blocks=1 00:16:05.892 00:16:05.892 ' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:05.892 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=85321 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 85321 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 85321 ']' 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:05.893 21:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:05.893 [2024-09-30 21:57:50.674334] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:05.893 [2024-09-30 21:57:50.674612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85321 ] 00:16:06.151 [2024-09-30 21:57:50.804447] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
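
Hidden behind the waitforlisten call traced above is a simple fork-and-poll pattern: start the target, then retry an RPC against the UNIX socket until it answers, giving up after max_retries=100 (per the trace). A reduced sketch of that pattern, assuming polling via rpc_get_methods (the real helper in autotest_common.sh does additional bookkeeping):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$spdk_tgt" -m 7 &                    # core mask 0x7 = cores 0-2, matching the 3 reactors above
    svcpid=$!
    for ((i = 0; i < 100; i++)); do       # max_retries=100, per the trace
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        kill -0 "$svcpid" || exit 1       # bail out early if the target died
        sleep 0.1
    done
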
00:16:06.152 [2024-09-30 21:57:50.819644] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:06.152 [2024-09-30 21:57:50.864792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.152 [2024-09-30 21:57:50.865166] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.152 [2024-09-30 21:57:50.865261] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:06.719 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:06.977 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:06.977 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:06.977 21:57:51 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:06.977 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:06.977 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:06.977 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:06.977 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:07.236 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:07.236 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:07.236 { 00:16:07.236 "name": "nvme0n1", 00:16:07.236 "aliases": [ 00:16:07.236 "fe822894-d85e-4ee9-83f7-b340ed8e83e4" 00:16:07.236 ], 00:16:07.236 "product_name": "NVMe disk", 00:16:07.236 "block_size": 4096, 00:16:07.236 "num_blocks": 1310720, 00:16:07.236 "uuid": "fe822894-d85e-4ee9-83f7-b340ed8e83e4", 00:16:07.236 "numa_id": -1, 00:16:07.236 "assigned_rate_limits": { 00:16:07.236 "rw_ios_per_sec": 0, 00:16:07.236 "rw_mbytes_per_sec": 0, 00:16:07.236 "r_mbytes_per_sec": 0, 00:16:07.236 "w_mbytes_per_sec": 0 00:16:07.236 }, 00:16:07.236 "claimed": false, 00:16:07.236 "zoned": false, 00:16:07.236 "supported_io_types": { 00:16:07.236 "read": true, 00:16:07.236 "write": true, 00:16:07.236 "unmap": true, 00:16:07.236 "flush": true, 00:16:07.236 "reset": true, 00:16:07.236 "nvme_admin": true, 00:16:07.236 "nvme_io": true, 00:16:07.236 "nvme_io_md": false, 00:16:07.236 "write_zeroes": true, 00:16:07.236 "zcopy": false, 00:16:07.236 "get_zone_info": false, 00:16:07.236 "zone_management": false, 00:16:07.236 "zone_append": false, 00:16:07.236 "compare": true, 00:16:07.236 "compare_and_write": false, 00:16:07.236 "abort": true, 00:16:07.236 "seek_hole": false, 00:16:07.236 "seek_data": false, 00:16:07.236 "copy": true, 00:16:07.236 "nvme_iov_md": false 00:16:07.236 }, 00:16:07.236 "driver_specific": { 00:16:07.236 "nvme": [ 00:16:07.236 { 00:16:07.236 "pci_address": "0000:00:11.0", 00:16:07.236 "trid": { 00:16:07.236 "trtype": "PCIe", 00:16:07.236 
"traddr": "0000:00:11.0" 00:16:07.236 }, 00:16:07.236 "ctrlr_data": { 00:16:07.236 "cntlid": 0, 00:16:07.236 "vendor_id": "0x1b36", 00:16:07.236 "model_number": "QEMU NVMe Ctrl", 00:16:07.236 "serial_number": "12341", 00:16:07.236 "firmware_revision": "8.0.0", 00:16:07.236 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:07.236 "oacs": { 00:16:07.236 "security": 0, 00:16:07.236 "format": 1, 00:16:07.236 "firmware": 0, 00:16:07.236 "ns_manage": 1 00:16:07.236 }, 00:16:07.236 "multi_ctrlr": false, 00:16:07.236 "ana_reporting": false 00:16:07.236 }, 00:16:07.236 "vs": { 00:16:07.236 "nvme_version": "1.4" 00:16:07.236 }, 00:16:07.236 "ns_data": { 00:16:07.236 "id": 1, 00:16:07.236 "can_share": false 00:16:07.236 } 00:16:07.236 } 00:16:07.236 ], 00:16:07.236 "mp_policy": "active_passive" 00:16:07.236 } 00:16:07.236 } 00:16:07.236 ]' 00:16:07.236 21:57:51 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:07.236 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:07.236 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:07.495 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:07.753 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=efdddd32-0639-4940-8d7e-24f3da8beb6e 00:16:07.753 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u efdddd32-0639-4940-8d7e-24f3da8beb6e 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:08.011 21:57:52 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:08.268 { 00:16:08.268 "name": "52f166aa-6b63-470f-b8ba-5f10194d9df8", 00:16:08.268 "aliases": [ 00:16:08.268 "lvs/nvme0n1p0" 00:16:08.268 ], 00:16:08.268 "product_name": "Logical Volume", 00:16:08.268 "block_size": 4096, 00:16:08.268 "num_blocks": 26476544, 00:16:08.268 "uuid": "52f166aa-6b63-470f-b8ba-5f10194d9df8", 00:16:08.268 "assigned_rate_limits": { 00:16:08.268 "rw_ios_per_sec": 0, 00:16:08.268 "rw_mbytes_per_sec": 0, 00:16:08.268 "r_mbytes_per_sec": 0, 00:16:08.268 "w_mbytes_per_sec": 0 00:16:08.268 }, 00:16:08.268 "claimed": false, 00:16:08.268 "zoned": false, 00:16:08.268 "supported_io_types": { 00:16:08.268 "read": true, 00:16:08.268 "write": true, 00:16:08.268 "unmap": true, 00:16:08.268 "flush": false, 00:16:08.268 "reset": true, 00:16:08.268 "nvme_admin": false, 00:16:08.268 "nvme_io": false, 00:16:08.268 "nvme_io_md": false, 00:16:08.268 "write_zeroes": true, 00:16:08.268 "zcopy": false, 00:16:08.268 "get_zone_info": false, 00:16:08.268 "zone_management": false, 00:16:08.268 "zone_append": false, 00:16:08.268 "compare": false, 00:16:08.268 "compare_and_write": false, 00:16:08.268 "abort": false, 00:16:08.268 "seek_hole": true, 00:16:08.268 "seek_data": true, 00:16:08.268 "copy": false, 00:16:08.268 "nvme_iov_md": false 00:16:08.268 }, 00:16:08.268 "driver_specific": { 00:16:08.268 "lvol": { 00:16:08.268 "lvol_store_uuid": "efdddd32-0639-4940-8d7e-24f3da8beb6e", 00:16:08.268 "base_bdev": "nvme0n1", 00:16:08.268 "thin_provision": true, 00:16:08.268 "num_allocated_clusters": 0, 00:16:08.268 "snapshot": false, 00:16:08.268 "clone": false, 00:16:08.268 "esnap_clone": false 00:16:08.268 } 00:16:08.268 } 00:16:08.268 } 00:16:08.268 ]' 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:08.268 21:57:52 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:08.526 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:08.785 { 00:16:08.785 "name": "52f166aa-6b63-470f-b8ba-5f10194d9df8", 00:16:08.785 "aliases": [ 00:16:08.785 "lvs/nvme0n1p0" 00:16:08.785 ], 00:16:08.785 "product_name": "Logical Volume", 00:16:08.785 "block_size": 4096, 00:16:08.785 "num_blocks": 26476544, 00:16:08.785 "uuid": "52f166aa-6b63-470f-b8ba-5f10194d9df8", 00:16:08.785 "assigned_rate_limits": { 00:16:08.785 "rw_ios_per_sec": 0, 00:16:08.785 "rw_mbytes_per_sec": 0, 00:16:08.785 "r_mbytes_per_sec": 0, 00:16:08.785 "w_mbytes_per_sec": 0 00:16:08.785 }, 00:16:08.785 "claimed": false, 00:16:08.785 "zoned": false, 00:16:08.785 "supported_io_types": { 00:16:08.785 "read": true, 00:16:08.785 "write": true, 00:16:08.785 "unmap": true, 00:16:08.785 "flush": false, 00:16:08.785 "reset": true, 00:16:08.785 "nvme_admin": false, 00:16:08.785 "nvme_io": false, 00:16:08.785 "nvme_io_md": false, 00:16:08.785 "write_zeroes": true, 00:16:08.785 "zcopy": false, 00:16:08.785 "get_zone_info": false, 00:16:08.785 "zone_management": false, 00:16:08.785 "zone_append": false, 00:16:08.785 "compare": false, 00:16:08.785 "compare_and_write": false, 00:16:08.785 "abort": false, 00:16:08.785 "seek_hole": true, 00:16:08.785 "seek_data": true, 00:16:08.785 "copy": false, 00:16:08.785 "nvme_iov_md": false 00:16:08.785 }, 00:16:08.785 "driver_specific": { 00:16:08.785 "lvol": { 00:16:08.785 "lvol_store_uuid": "efdddd32-0639-4940-8d7e-24f3da8beb6e", 00:16:08.785 "base_bdev": "nvme0n1", 00:16:08.785 "thin_provision": true, 00:16:08.785 "num_allocated_clusters": 0, 00:16:08.785 "snapshot": false, 00:16:08.785 "clone": false, 00:16:08.785 "esnap_clone": false 00:16:08.785 } 00:16:08.785 } 00:16:08.785 } 00:16:08.785 ]' 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:08.785 21:57:53 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:09.043 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:16:09.043 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:16:09.043 21:57:53 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52f166aa-6b63-470f-b8ba-5f10194d9df8 00:16:09.301 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:09.301 { 00:16:09.301 "name": "52f166aa-6b63-470f-b8ba-5f10194d9df8", 00:16:09.301 "aliases": [ 00:16:09.301 "lvs/nvme0n1p0" 00:16:09.301 ], 00:16:09.301 "product_name": "Logical Volume", 00:16:09.301 "block_size": 4096, 00:16:09.301 "num_blocks": 26476544, 00:16:09.301 "uuid": "52f166aa-6b63-470f-b8ba-5f10194d9df8", 00:16:09.301 "assigned_rate_limits": { 00:16:09.301 "rw_ios_per_sec": 0, 00:16:09.301 "rw_mbytes_per_sec": 0, 00:16:09.301 "r_mbytes_per_sec": 0, 00:16:09.301 "w_mbytes_per_sec": 0 00:16:09.301 }, 00:16:09.301 "claimed": false, 00:16:09.301 "zoned": false, 00:16:09.301 "supported_io_types": { 00:16:09.301 "read": true, 00:16:09.301 "write": true, 00:16:09.301 "unmap": true, 00:16:09.301 "flush": false, 00:16:09.301 "reset": true, 00:16:09.301 "nvme_admin": false, 00:16:09.301 "nvme_io": false, 00:16:09.301 "nvme_io_md": false, 00:16:09.301 "write_zeroes": true, 00:16:09.301 "zcopy": false, 00:16:09.301 "get_zone_info": false, 00:16:09.301 "zone_management": false, 00:16:09.301 "zone_append": false, 00:16:09.301 "compare": false, 00:16:09.301 "compare_and_write": false, 00:16:09.301 "abort": false, 00:16:09.301 "seek_hole": true, 00:16:09.302 "seek_data": true, 00:16:09.302 "copy": false, 00:16:09.302 "nvme_iov_md": false 00:16:09.302 }, 00:16:09.302 "driver_specific": { 00:16:09.302 "lvol": { 00:16:09.302 "lvol_store_uuid": "efdddd32-0639-4940-8d7e-24f3da8beb6e", 00:16:09.302 "base_bdev": "nvme0n1", 00:16:09.302 "thin_provision": true, 00:16:09.302 "num_allocated_clusters": 0, 00:16:09.302 "snapshot": false, 00:16:09.302 "clone": false, 00:16:09.302 "esnap_clone": false 00:16:09.302 } 00:16:09.302 } 00:16:09.302 } 00:16:09.302 ]' 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:09.302 21:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 52f166aa-6b63-470f-b8ba-5f10194d9df8 -c nvc0n1p0 --l2p_dram_limit 60 00:16:09.561 [2024-09-30 21:57:54.160898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.161094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:09.561 [2024-09-30 21:57:54.161123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:09.561 [2024-09-30 21:57:54.161131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.161207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.161226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:09.561 [2024-09-30 21:57:54.161254] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:16:09.561 [2024-09-30 21:57:54.161263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.161296] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:09.561 [2024-09-30 21:57:54.161528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:09.561 [2024-09-30 21:57:54.161542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.161549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:09.561 [2024-09-30 21:57:54.161559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:16:09.561 [2024-09-30 21:57:54.161567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.161627] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c0f4d983-aba6-4f69-a8d0-aa1de43aa910 00:16:09.561 [2024-09-30 21:57:54.162945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.162969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:09.561 [2024-09-30 21:57:54.162978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:09.561 [2024-09-30 21:57:54.162987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.169907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.169947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:09.561 [2024-09-30 21:57:54.169955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:16:09.561 [2024-09-30 21:57:54.169964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.170051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.170061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:09.561 [2024-09-30 21:57:54.170069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:16:09.561 [2024-09-30 21:57:54.170077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.170143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.170154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:09.561 [2024-09-30 21:57:54.170160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:09.561 [2024-09-30 21:57:54.170168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.170202] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:09.561 [2024-09-30 21:57:54.171845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.171967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:09.561 [2024-09-30 21:57:54.171983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:16:09.561 [2024-09-30 21:57:54.171990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.172030] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.172037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:09.561 [2024-09-30 21:57:54.172057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:09.561 [2024-09-30 21:57:54.172063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.172084] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:09.561 [2024-09-30 21:57:54.172230] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:09.561 [2024-09-30 21:57:54.172243] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:09.561 [2024-09-30 21:57:54.172261] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:09.561 [2024-09-30 21:57:54.172272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:09.561 [2024-09-30 21:57:54.172279] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:09.561 [2024-09-30 21:57:54.172290] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:09.561 [2024-09-30 21:57:54.172297] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:09.561 [2024-09-30 21:57:54.172305] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:09.561 [2024-09-30 21:57:54.172311] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:09.561 [2024-09-30 21:57:54.172320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.172331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:09.561 [2024-09-30 21:57:54.172339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:16:09.561 [2024-09-30 21:57:54.172345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.172426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.561 [2024-09-30 21:57:54.172433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:09.561 [2024-09-30 21:57:54.172440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:16:09.561 [2024-09-30 21:57:54.172448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.561 [2024-09-30 21:57:54.172544] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:09.561 [2024-09-30 21:57:54.172553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:09.561 [2024-09-30 21:57:54.172563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:09.561 [2024-09-30 21:57:54.172570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.561 [2024-09-30 21:57:54.172579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:09.561 [2024-09-30 21:57:54.172585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:09.561 [2024-09-30 21:57:54.172592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:09.561 [2024-09-30 21:57:54.172599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:09.561 
[2024-09-30 21:57:54.172608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:09.561 [2024-09-30 21:57:54.172614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:09.561 [2024-09-30 21:57:54.172622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:09.561 [2024-09-30 21:57:54.172628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:09.561 [2024-09-30 21:57:54.172638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:09.561 [2024-09-30 21:57:54.172645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:09.561 [2024-09-30 21:57:54.172653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:09.561 [2024-09-30 21:57:54.172659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.561 [2024-09-30 21:57:54.172667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:09.561 [2024-09-30 21:57:54.172673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:09.561 [2024-09-30 21:57:54.172680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.561 [2024-09-30 21:57:54.172688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:09.561 [2024-09-30 21:57:54.172695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.562 [2024-09-30 21:57:54.172708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:09.562 [2024-09-30 21:57:54.172714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.562 [2024-09-30 21:57:54.172740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:09.562 [2024-09-30 21:57:54.172748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.562 [2024-09-30 21:57:54.172763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:09.562 [2024-09-30 21:57:54.172769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:09.562 [2024-09-30 21:57:54.172782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:09.562 [2024-09-30 21:57:54.172790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:09.562 [2024-09-30 21:57:54.172803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:09.562 [2024-09-30 21:57:54.172809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:09.562 [2024-09-30 21:57:54.172817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:09.562 [2024-09-30 21:57:54.172822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:09.562 [2024-09-30 21:57:54.172829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:09.562 [2024-09-30 21:57:54.172835] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:09.562 [2024-09-30 21:57:54.172848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:09.562 [2024-09-30 21:57:54.172855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172861] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:09.562 [2024-09-30 21:57:54.172878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:09.562 [2024-09-30 21:57:54.172892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:09.562 [2024-09-30 21:57:54.172900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:09.562 [2024-09-30 21:57:54.172909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:09.562 [2024-09-30 21:57:54.172917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:09.562 [2024-09-30 21:57:54.172923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:09.562 [2024-09-30 21:57:54.172931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:09.562 [2024-09-30 21:57:54.172936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:09.562 [2024-09-30 21:57:54.172943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:09.562 [2024-09-30 21:57:54.172952] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:09.562 [2024-09-30 21:57:54.172960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:09.562 [2024-09-30 21:57:54.172967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:09.562 [2024-09-30 21:57:54.172973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:09.562 [2024-09-30 21:57:54.172981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:09.562 [2024-09-30 21:57:54.172988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:09.562 [2024-09-30 21:57:54.172993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:09.562 [2024-09-30 21:57:54.173001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:09.562 [2024-09-30 21:57:54.173007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:09.562 [2024-09-30 21:57:54.173013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:09.562 [2024-09-30 21:57:54.173018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:09.562 [2024-09-30 21:57:54.173025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:16:09.562 [2024-09-30 21:57:54.173030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:09.562 [2024-09-30 21:57:54.173037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:09.562 [2024-09-30 21:57:54.173042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:09.562 [2024-09-30 21:57:54.173049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:09.562 [2024-09-30 21:57:54.173054] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:09.562 [2024-09-30 21:57:54.173061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:09.562 [2024-09-30 21:57:54.173067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:09.562 [2024-09-30 21:57:54.173074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:09.562 [2024-09-30 21:57:54.173080] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:09.562 [2024-09-30 21:57:54.173087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:09.562 [2024-09-30 21:57:54.173092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.562 [2024-09-30 21:57:54.173109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:09.562 [2024-09-30 21:57:54.173115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.602 ms 00:16:09.562 [2024-09-30 21:57:54.173122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.562 [2024-09-30 21:57:54.173164] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
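
Every trace_step record above and below carries a matching name: and duration: pair, and the NV cache scrub that starts here accounts for nearly all of the FTL startup wall time reported further down. A hypothetical way to tabulate the slowest steps from a saved copy of this log (the filename is assumed, and the rough name pattern leaves some timestamp residue attached to step names):

    grep -oE 'name: [^[]+|duration: [0-9.]+ ms' ftl_startup.log \
        | paste -d' ' - - | sort -t: -k3 -rn | head
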
00:16:09.562 [2024-09-30 21:57:54.173180] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:13.745 [2024-09-30 21:57:57.960484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.745 [2024-09-30 21:57:57.960545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:13.745 [2024-09-30 21:57:57.960560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3787.306 ms 00:16:13.745 [2024-09-30 21:57:57.960571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.745 [2024-09-30 21:57:57.979880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.745 [2024-09-30 21:57:57.979993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:13.745 [2024-09-30 21:57:57.980028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.198 ms 00:16:13.745 [2024-09-30 21:57:57.980063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.745 [2024-09-30 21:57:57.980455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.745 [2024-09-30 21:57:57.980508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:13.745 [2024-09-30 21:57:57.980533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:16:13.745 [2024-09-30 21:57:57.980584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.745 [2024-09-30 21:57:57.992486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.745 [2024-09-30 21:57:57.992533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:13.745 [2024-09-30 21:57:57.992544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.736 ms 00:16:13.745 [2024-09-30 21:57:57.992556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.745 [2024-09-30 21:57:57.992609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.745 [2024-09-30 21:57:57.992619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:13.745 [2024-09-30 21:57:57.992639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:13.746 [2024-09-30 21:57:57.992657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:57.993046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:57.993070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:13.746 [2024-09-30 21:57:57.993079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:13.746 [2024-09-30 21:57:57.993091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:57.993257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:57.993270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:13.746 [2024-09-30 21:57:57.993289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:16:13.746 [2024-09-30 21:57:57.993301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:57.998897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:57.998935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:13.746 [2024-09-30 
21:57:57.998944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.565 ms 00:16:13.746 [2024-09-30 21:57:57.998953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.007291] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:13.746 [2024-09-30 21:57:58.022021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.022280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:13.746 [2024-09-30 21:57:58.022302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.969 ms 00:16:13.746 [2024-09-30 21:57:58.022310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.073381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.073614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:13.746 [2024-09-30 21:57:58.073638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.013 ms 00:16:13.746 [2024-09-30 21:57:58.073647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.073843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.073856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:13.746 [2024-09-30 21:57:58.073877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:16:13.746 [2024-09-30 21:57:58.073895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.076951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.077080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:13.746 [2024-09-30 21:57:58.077101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:16:13.746 [2024-09-30 21:57:58.077110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.079465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.079496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:13.746 [2024-09-30 21:57:58.079509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.307 ms 00:16:13.746 [2024-09-30 21:57:58.079517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.079831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.079845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:13.746 [2024-09-30 21:57:58.079856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:16:13.746 [2024-09-30 21:57:58.079863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.111002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.111059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:13.746 [2024-09-30 21:57:58.111076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.103 ms 00:16:13.746 [2024-09-30 21:57:58.111084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.115311] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.115355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:13.746 [2024-09-30 21:57:58.115368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.161 ms 00:16:13.746 [2024-09-30 21:57:58.115388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.119122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.119315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:13.746 [2024-09-30 21:57:58.119334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:16:13.746 [2024-09-30 21:57:58.119342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.122974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.123109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:13.746 [2024-09-30 21:57:58.123130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:16:13.746 [2024-09-30 21:57:58.123138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.123183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.123203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:13.746 [2024-09-30 21:57:58.123214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:13.746 [2024-09-30 21:57:58.123222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.123297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.746 [2024-09-30 21:57:58.123306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:13.746 [2024-09-30 21:57:58.123315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:13.746 [2024-09-30 21:57:58.123325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.746 [2024-09-30 21:57:58.124308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3962.950 ms, result 0 00:16:13.746 { 00:16:13.746 "name": "ftl0", 00:16:13.746 "uuid": "c0f4d983-aba6-4f69-a8d0-aa1de43aa910" 00:16:13.746 } 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:13.746 [ 00:16:13.746 { 00:16:13.746 "name": "ftl0", 00:16:13.746 "aliases": [ 00:16:13.746 "c0f4d983-aba6-4f69-a8d0-aa1de43aa910" 00:16:13.746 ], 00:16:13.746 "product_name": "FTL disk", 00:16:13.746 
"block_size": 4096, 00:16:13.746 "num_blocks": 20971520, 00:16:13.746 "uuid": "c0f4d983-aba6-4f69-a8d0-aa1de43aa910", 00:16:13.746 "assigned_rate_limits": { 00:16:13.746 "rw_ios_per_sec": 0, 00:16:13.746 "rw_mbytes_per_sec": 0, 00:16:13.746 "r_mbytes_per_sec": 0, 00:16:13.746 "w_mbytes_per_sec": 0 00:16:13.746 }, 00:16:13.746 "claimed": false, 00:16:13.746 "zoned": false, 00:16:13.746 "supported_io_types": { 00:16:13.746 "read": true, 00:16:13.746 "write": true, 00:16:13.746 "unmap": true, 00:16:13.746 "flush": true, 00:16:13.746 "reset": false, 00:16:13.746 "nvme_admin": false, 00:16:13.746 "nvme_io": false, 00:16:13.746 "nvme_io_md": false, 00:16:13.746 "write_zeroes": true, 00:16:13.746 "zcopy": false, 00:16:13.746 "get_zone_info": false, 00:16:13.746 "zone_management": false, 00:16:13.746 "zone_append": false, 00:16:13.746 "compare": false, 00:16:13.746 "compare_and_write": false, 00:16:13.746 "abort": false, 00:16:13.746 "seek_hole": false, 00:16:13.746 "seek_data": false, 00:16:13.746 "copy": false, 00:16:13.746 "nvme_iov_md": false 00:16:13.746 }, 00:16:13.746 "driver_specific": { 00:16:13.746 "ftl": { 00:16:13.746 "base_bdev": "52f166aa-6b63-470f-b8ba-5f10194d9df8", 00:16:13.746 "cache": "nvc0n1p0" 00:16:13.746 } 00:16:13.746 } 00:16:13.746 } 00:16:13.746 ] 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:13.746 21:57:58 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:14.006 21:57:58 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:14.006 21:57:58 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:14.006 [2024-09-30 21:57:58.780296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.780350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:14.006 [2024-09-30 21:57:58.780363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:14.006 [2024-09-30 21:57:58.780373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.780426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:14.006 [2024-09-30 21:57:58.780888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.780905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:14.006 [2024-09-30 21:57:58.780930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:16:14.006 [2024-09-30 21:57:58.780940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.781568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.781591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:14.006 [2024-09-30 21:57:58.781602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:16:14.006 [2024-09-30 21:57:58.781611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.784864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.784993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:14.006 [2024-09-30 
21:57:58.785011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.222 ms 00:16:14.006 [2024-09-30 21:57:58.785018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.791252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.791360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:14.006 [2024-09-30 21:57:58.791392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.196 ms 00:16:14.006 [2024-09-30 21:57:58.791399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.792983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.793016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:14.006 [2024-09-30 21:57:58.793027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.467 ms 00:16:14.006 [2024-09-30 21:57:58.793034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.797100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.797134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:14.006 [2024-09-30 21:57:58.797146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.022 ms 00:16:14.006 [2024-09-30 21:57:58.797156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.797375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.797392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:14.006 [2024-09-30 21:57:58.797403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:16:14.006 [2024-09-30 21:57:58.797410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.799110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.799238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:14.006 [2024-09-30 21:57:58.799256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.670 ms 00:16:14.006 [2024-09-30 21:57:58.799263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.800859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.006 [2024-09-30 21:57:58.800885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:14.006 [2024-09-30 21:57:58.800895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.542 ms 00:16:14.006 [2024-09-30 21:57:58.800902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.006 [2024-09-30 21:57:58.801825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.007 [2024-09-30 21:57:58.801856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:14.007 [2024-09-30 21:57:58.801867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:16:14.007 [2024-09-30 21:57:58.801873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.007 [2024-09-30 21:57:58.802769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.007 [2024-09-30 21:57:58.802869] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:14.007 [2024-09-30 21:57:58.802886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms 00:16:14.007 [2024-09-30 21:57:58.802893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.007 [2024-09-30 21:57:58.802942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:14.007 [2024-09-30 21:57:58.802955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.802967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.802975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.802989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.802997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 
21:57:58.803146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:16:14.007 [2024-09-30 21:57:58.803379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:14.007 [2024-09-30 21:57:58.803681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:14.008 [2024-09-30 21:57:58.803841] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:14.008 [2024-09-30 21:57:58.803851] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c0f4d983-aba6-4f69-a8d0-aa1de43aa910 00:16:14.008 [2024-09-30 21:57:58.803859] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:14.008 [2024-09-30 21:57:58.803870] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:14.008 [2024-09-30 21:57:58.803877] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:14.008 [2024-09-30 21:57:58.803886] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:14.008 [2024-09-30 21:57:58.803893] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:14.008 [2024-09-30 21:57:58.803903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:14.008 [2024-09-30 21:57:58.803910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:14.008 [2024-09-30 21:57:58.803917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:14.008 [2024-09-30 21:57:58.803923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:14.008 [2024-09-30 21:57:58.803932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.008 [2024-09-30 21:57:58.803939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:14.008 [2024-09-30 21:57:58.803948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:16:14.008 [2024-09-30 21:57:58.803955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.008 [2024-09-30 21:57:58.805557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.008 [2024-09-30 21:57:58.805574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:14.008 [2024-09-30 21:57:58.805585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:16:14.008 [2024-09-30 21:57:58.805593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.008 [2024-09-30 21:57:58.805700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:14.008 [2024-09-30 21:57:58.805709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:14.008 [2024-09-30 21:57:58.805719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:14.008 [2024-09-30 21:57:58.805725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.008 [2024-09-30 21:57:58.811413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.008 [2024-09-30 21:57:58.811451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:14.008 [2024-09-30 21:57:58.811473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.008 [2024-09-30 21:57:58.811491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.008 
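For reference: the statistics block above shows why the write-amplification factor is printed as "inf" — ftl0 recorded 960 total media writes (all FTL metadata persisted during shutdown) against 0 user writes. A minimal sketch of that arithmetic, assuming the counters mean what the ftl_debug output labels them:

total_writes=960   # "total writes" from the dump: media writes issued by ftl0
user_writes=0      # "user writes": no user I/O ran before this shutdown
# WAF = media writes / user writes; a zero denominator is reported as "inf"
awk -v t="$total_writes" -v u="$user_writes" \
    'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.2f\n", t / u }'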
[2024-09-30 21:57:58.811570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.008 [2024-09-30 21:57:58.811578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:14.008 [2024-09-30 21:57:58.811597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.008 [2024-09-30 21:57:58.811604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.008 [2024-09-30 21:57:58.811701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.008 [2024-09-30 21:57:58.811721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:14.008 [2024-09-30 21:57:58.811731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.008 [2024-09-30 21:57:58.811738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.008 [2024-09-30 21:57:58.811766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.008 [2024-09-30 21:57:58.811774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:14.008 [2024-09-30 21:57:58.811783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.008 [2024-09-30 21:57:58.811790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.268 [2024-09-30 21:57:58.821624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.268 [2024-09-30 21:57:58.821802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:14.268 [2024-09-30 21:57:58.821823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.268 [2024-09-30 21:57:58.821830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.268 [2024-09-30 21:57:58.829810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.268 [2024-09-30 21:57:58.829948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:14.268 [2024-09-30 21:57:58.830006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.268 [2024-09-30 21:57:58.830029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.268 [2024-09-30 21:57:58.830127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.268 [2024-09-30 21:57:58.830159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:14.268 [2024-09-30 21:57:58.830228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.268 [2024-09-30 21:57:58.830252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.268 [2024-09-30 21:57:58.830324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.268 [2024-09-30 21:57:58.830367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:14.268 [2024-09-30 21:57:58.830390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.268 [2024-09-30 21:57:58.830441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:14.268 [2024-09-30 21:57:58.830550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:14.268 [2024-09-30 21:57:58.830580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:14.268 [2024-09-30 21:57:58.830638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:14.268 [2024-09-30 21:57:58.830660] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.268 [2024-09-30 21:57:58.830783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:14.268 [2024-09-30 21:57:58.830821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:16:14.268 [2024-09-30 21:57:58.830843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:14.268 [2024-09-30 21:57:58.830900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.268 [2024-09-30 21:57:58.830966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:14.268 [2024-09-30 21:57:58.830991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:16:14.268 [2024-09-30 21:57:58.831058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:14.268 [2024-09-30 21:57:58.831117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.268 [2024-09-30 21:57:58.831202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:14.268 [2024-09-30 21:57:58.831310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:14.268 [2024-09-30 21:57:58.831337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:14.268 [2024-09-30 21:57:58.831356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:14.268 [2024-09-30 21:57:58.831543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.221 ms, result 0
00:16:14.268 true
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 85321
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 85321 ']'
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 85321
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85321
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85321'
00:16:14.268 killing process with pid 85321
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 85321
00:16:14.268 21:57:58 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 85321
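For reference, the killprocess 85321 sequence traced above reduces to the following helper — a sketch reconstructed from the xtrace; the function wrapper and return handling are assumed:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1              # the '[' -z 85321 ']' guard above
    kill -0 "$pid" || return 1             # probe that the process is still alive
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    [ "$process_name" = sudo ] && return 1 # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # reap it so shared memory and sockets are released
}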
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests}
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib=
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:18.458 21:58:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
00:16:18.458 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1
00:16:18.458 fio-3.35
00:16:18.458 Starting 1 thread
00:16:21.737
00:16:21.737 test: (groupid=0, jobs=1): err= 0: pid=85491: Mon Sep 30 21:58:06 2024
00:16:21.737 read: IOPS=1392, BW=92.5MiB/s (97.0MB/s)(255MiB/2752msec)
00:16:21.737 slat (usec): min=3, max=105, avg= 4.66, stdev= 2.66
00:16:21.737 clat (usec): min=247, max=800, avg=322.78, stdev=39.76
00:16:21.737 lat (usec): min=251, max=803, avg=327.45, stdev=40.59
00:16:21.737 clat percentiles (usec):
00:16:21.737 | 1.00th=[ 273], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 306],
00:16:21.737 | 30.00th=[ 310], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318],
00:16:21.737 | 70.00th=[ 318], 80.00th=[ 326], 90.00th=[ 343], 95.00th=[ 412],
00:16:21.737 | 99.00th=[ 494], 99.50th=[ 545], 99.90th=[ 652], 99.95th=[ 734],
00:16:21.737 | 99.99th=[ 799]
00:16:21.737 write: IOPS=1402, BW=93.1MiB/s (97.7MB/s)(256MiB/2749msec); 0 zone resets
00:16:21.737 slat (usec): min=13, max=113, avg=20.22, stdev= 5.02
00:16:21.737 clat (usec): min=284, max=1024, avg=354.04, stdev=55.48
00:16:21.737 lat (usec): min=305, max=1082, avg=374.26, stdev=56.27
00:16:21.737 clat percentiles (usec):
00:16:21.737 | 1.00th=[ 306], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330],
00:16:21.737 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 338], 60.00th=[ 343],
00:16:21.737 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 396], 95.00th=[ 412],
00:16:21.737 | 99.00th=[ 644], 99.50th=[ 709], 99.90th=[ 898], 99.95th=[ 988],
00:16:21.737 | 99.99th=[ 1029]
00:16:21.737 bw ( KiB/s): min=93704, max=96832, per=100.00%, avg=95390.40, stdev=1341.51, samples=5
00:16:21.737 iops : min= 1378, max= 1424, avg=1402.80, stdev=19.73, samples=5
00:16:21.737 lat (usec) : 250=0.01%, 500=98.05%, 750=1.78%, 1000=0.14%
00:16:21.737 lat
(msec) : 2=0.01% 00:16:21.737 cpu : usr=99.38%, sys=0.00%, ctx=5, majf=0, minf=1181 00:16:21.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:21.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.737 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:21.737 00:16:21.737 Run status group 0 (all jobs): 00:16:21.737 READ: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=255MiB (267MB), run=2752-2752msec 00:16:21.737 WRITE: bw=93.1MiB/s (97.7MB/s), 93.1MiB/s-93.1MiB/s (97.7MB/s-97.7MB/s), io=256MiB (269MB), run=2749-2749msec 00:16:21.995 ----------------------------------------------------- 00:16:21.995 Suppressions used: 00:16:21.995 count bytes template 00:16:21.995 1 5 /usr/src/fio/parse.c 00:16:21.995 1 8 libtcmalloc_minimal.so 00:16:21.995 1 904 libcrypto.so 00:16:21.995 ----------------------------------------------------- 00:16:21.995 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:21.995 21:58:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:22.253 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:22.253 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:22.253 fio-3.35 00:16:22.253 Starting 2 threads 00:16:48.835 00:16:48.835 first_half: (groupid=0, jobs=1): err= 0: pid=85572: Mon Sep 30 21:58:29 2024 00:16:48.835 read: IOPS=3023, BW=11.8MiB/s (12.4MB/s)(255MiB/21568msec) 00:16:48.835 slat (usec): min=3, max=1448, avg= 4.39, stdev= 5.77 00:16:48.835 clat (usec): min=634, max=223059, avg=33967.78, stdev=17128.22 00:16:48.835 lat (usec): min=638, max=223063, avg=33972.17, stdev=17128.37 00:16:48.835 clat percentiles (msec): 00:16:48.835 | 1.00th=[ 10], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 28], 00:16:48.835 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:16:48.835 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 53], 00:16:48.835 | 99.00th=[ 136], 99.50th=[ 146], 99.90th=[ 161], 99.95th=[ 174], 00:16:48.835 | 99.99th=[ 215] 00:16:48.835 write: IOPS=4376, BW=17.1MiB/s (17.9MB/s)(256MiB/14976msec); 0 zone resets 00:16:48.835 slat (usec): min=3, max=233, avg= 6.10, stdev= 3.27 00:16:48.835 clat (usec): min=343, max=74175, avg=8308.78, stdev=13704.17 00:16:48.835 lat (usec): min=352, max=74180, avg=8314.88, stdev=13704.19 00:16:48.835 clat percentiles (usec): 00:16:48.835 | 1.00th=[ 668], 5.00th=[ 857], 10.00th=[ 963], 20.00th=[ 1172], 00:16:48.835 | 30.00th=[ 2040], 40.00th=[ 3261], 50.00th=[ 4424], 60.00th=[ 5342], 00:16:48.835 | 70.00th=[ 6063], 80.00th=[ 9503], 90.00th=[13304], 95.00th=[54789], 00:16:48.835 | 99.00th=[64750], 99.50th=[67634], 99.90th=[71828], 99.95th=[71828], 00:16:48.835 | 99.99th=[72877] 00:16:48.835 bw ( KiB/s): min= 5848, max=46264, per=100.00%, avg=29123.11, stdev=13055.85, samples=18 00:16:48.835 iops : min= 1462, max=11566, avg=7280.78, stdev=3263.96, samples=18 00:16:48.835 lat (usec) : 500=0.02%, 750=1.09%, 1000=4.91% 00:16:48.835 lat (msec) : 2=9.15%, 4=8.33%, 10=17.76%, 20=5.59%, 50=47.77% 00:16:48.835 lat (msec) : 100=4.36%, 250=1.00% 00:16:48.835 cpu : usr=99.28%, sys=0.16%, ctx=40, majf=0, minf=5599 00:16:48.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:48.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.835 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.835 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.835 second_half: (groupid=0, jobs=1): err= 0: pid=85573: Mon Sep 30 21:58:29 2024 00:16:48.835 read: IOPS=3004, BW=11.7MiB/s (12.3MB/s)(255MiB/21746msec) 00:16:48.835 slat (nsec): min=2969, max=59186, avg=5165.72, stdev=1148.46 00:16:48.835 clat (usec): min=677, max=263760, avg=33374.34, stdev=19004.17 00:16:48.835 lat (usec): min=682, max=263763, avg=33379.51, stdev=19004.26 00:16:48.835 clat percentiles (msec): 00:16:48.835 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 27], 20.00th=[ 28], 00:16:48.835 | 30.00th=[ 29], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:16:48.835 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 95.00th=[ 45], 00:16:48.835 | 99.00th=[ 146], 
99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 236], 00:16:48.835 | 99.99th=[ 262] 00:16:48.835 write: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(256MiB/19699msec); 0 zone resets 00:16:48.835 slat (usec): min=3, max=114, avg= 6.77, stdev= 2.77 00:16:48.835 clat (usec): min=325, max=74549, avg=9183.70, stdev=14551.90 00:16:48.835 lat (usec): min=333, max=74555, avg=9190.47, stdev=14552.03 00:16:48.835 clat percentiles (usec): 00:16:48.835 | 1.00th=[ 676], 5.00th=[ 816], 10.00th=[ 979], 20.00th=[ 1418], 00:16:48.835 | 30.00th=[ 2868], 40.00th=[ 3720], 50.00th=[ 4555], 60.00th=[ 5145], 00:16:48.835 | 70.00th=[ 5866], 80.00th=[10159], 90.00th=[26608], 95.00th=[56361], 00:16:48.835 | 99.00th=[65799], 99.50th=[68682], 99.90th=[72877], 99.95th=[72877], 00:16:48.835 | 99.99th=[73925] 00:16:48.835 bw ( KiB/s): min= 432, max=61896, per=82.08%, avg=21845.33, stdev=14338.61, samples=24 00:16:48.835 iops : min= 108, max=15474, avg=5461.33, stdev=3584.65, samples=24 00:16:48.835 lat (usec) : 500=0.03%, 750=1.33%, 1000=3.96% 00:16:48.835 lat (msec) : 2=6.02%, 4=10.66%, 10=19.41%, 20=5.27%, 50=48.27% 00:16:48.835 lat (msec) : 100=3.91%, 250=1.12%, 500=0.02% 00:16:48.835 cpu : usr=99.18%, sys=0.17%, ctx=53, majf=0, minf=5533 00:16:48.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:48.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.835 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:48.835 issued rwts: total=65329,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:48.835 00:16:48.835 Run status group 0 (all jobs): 00:16:48.835 READ: bw=23.4MiB/s (24.6MB/s), 11.7MiB/s-11.8MiB/s (12.3MB/s-12.4MB/s), io=510MiB (535MB), run=21568-21746msec 00:16:48.835 WRITE: bw=26.0MiB/s (27.3MB/s), 13.0MiB/s-17.1MiB/s (13.6MB/s-17.9MB/s), io=512MiB (537MB), run=14976-19699msec 00:16:48.835 ----------------------------------------------------- 00:16:48.835 Suppressions used: 00:16:48.835 count bytes template 00:16:48.835 2 10 /usr/src/fio/parse.c 00:16:48.835 2 192 /usr/src/fio/iolog.c 00:16:48.835 1 8 libtcmalloc_minimal.so 00:16:48.835 1 904 libcrypto.so 00:16:48.835 ----------------------------------------------------- 00:16:48.835 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local sanitizers 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:48.835 21:58:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:48.835 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:48.835 fio-3.35 00:16:48.835 Starting 1 thread 00:16:58.798 00:16:58.798 test: (groupid=0, jobs=1): err= 0: pid=85855: Mon Sep 30 21:58:43 2024 00:16:58.798 read: IOPS=8258, BW=32.3MiB/s (33.8MB/s)(255MiB/7895msec) 00:16:58.798 slat (nsec): min=3044, max=50665, avg=4231.85, stdev=1305.01 00:16:58.798 clat (usec): min=484, max=34105, avg=15490.00, stdev=1392.19 00:16:58.798 lat (usec): min=489, max=34108, avg=15494.24, stdev=1392.42 00:16:58.798 clat percentiles (usec): 00:16:58.798 | 1.00th=[14222], 5.00th=[14484], 10.00th=[14615], 20.00th=[14746], 00:16:58.798 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15270], 60.00th=[15401], 00:16:58.798 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16188], 95.00th=[17171], 00:16:58.798 | 99.00th=[22152], 99.50th=[22938], 99.90th=[29754], 99.95th=[32113], 00:16:58.798 | 99.99th=[33817] 00:16:58.798 write: IOPS=16.6k, BW=64.9MiB/s (68.0MB/s)(256MiB/3946msec); 0 zone resets 00:16:58.798 slat (usec): min=4, max=649, avg= 5.91, stdev= 3.28 00:16:58.798 clat (usec): min=504, max=44779, avg=7663.54, stdev=9489.00 00:16:58.798 lat (usec): min=509, max=44785, avg=7669.46, stdev=9488.96 00:16:58.798 clat percentiles (usec): 00:16:58.798 | 1.00th=[ 627], 5.00th=[ 725], 10.00th=[ 824], 20.00th=[ 955], 00:16:58.798 | 30.00th=[ 1074], 40.00th=[ 1352], 50.00th=[ 5211], 60.00th=[ 5932], 00:16:58.798 | 70.00th=[ 6915], 80.00th=[ 8160], 90.00th=[27919], 95.00th=[29230], 00:16:58.798 | 99.00th=[31065], 99.50th=[33162], 99.90th=[39060], 99.95th=[39060], 00:16:58.798 | 99.99th=[43779] 00:16:58.798 bw ( KiB/s): min=51728, max=87040, per=98.65%, avg=65536.00, stdev=12392.66, samples=8 00:16:58.798 iops : min=12932, max=21760, avg=16384.00, stdev=3098.16, samples=8 00:16:58.798 lat (usec) : 500=0.01%, 750=3.12%, 1000=9.04% 00:16:58.798 lat (msec) : 2=8.53%, 4=0.50%, 10=20.62%, 20=49.22%, 50=8.96% 00:16:58.798 cpu : usr=99.19%, sys=0.13%, ctx=21, majf=0, minf=5577 00:16:58.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.1%, 32=0.1%, >=64=99.8% 00:16:58.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:58.798 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:58.798 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:58.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:58.798 00:16:58.798 Run status group 0 (all jobs): 00:16:58.798 READ: bw=32.3MiB/s (33.8MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.8MB/s), io=255MiB (267MB), run=7895-7895msec 00:16:58.798 WRITE: bw=64.9MiB/s (68.0MB/s), 64.9MiB/s-64.9MiB/s (68.0MB/s-68.0MB/s), io=256MiB (268MB), run=3946-3946msec 00:16:59.364 ----------------------------------------------------- 00:16:59.364 Suppressions used: 00:16:59.364 count bytes template 00:16:59.364 1 5 /usr/src/fio/parse.c 00:16:59.364 2 192 /usr/src/fio/iolog.c 00:16:59.364 1 8 libtcmalloc_minimal.so 00:16:59.364 1 904 libcrypto.so 00:16:59.364 ----------------------------------------------------- 00:16:59.364 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:59.364 Remove shared memory files 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:59.364 21:58:44 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid70772 /dev/shm/spdk_tgt_trace.pid84268 00:16:59.623 21:58:44 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:59.623 21:58:44 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:59.623 ************************************ 00:16:59.623 END TEST ftl_fio_basic 00:16:59.623 ************************************ 00:16:59.623 00:16:59.623 real 0m53.746s 00:16:59.623 user 1m58.445s 00:16:59.623 sys 0m2.522s 00:16:59.623 21:58:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:59.623 21:58:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:59.623 21:58:44 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:59.623 21:58:44 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:59.623 21:58:44 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:59.623 21:58:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:59.623 ************************************ 00:16:59.623 START TEST ftl_bdevperf 00:16:59.623 ************************************ 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:59.623 * Looking for test storage... 
00:16:59.623 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:59.623 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:59.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.624 --rc genhtml_branch_coverage=1 00:16:59.624 --rc genhtml_function_coverage=1 00:16:59.624 --rc genhtml_legend=1 00:16:59.624 --rc geninfo_all_blocks=1 00:16:59.624 --rc geninfo_unexecuted_blocks=1 00:16:59.624 00:16:59.624 ' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:59.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.624 --rc genhtml_branch_coverage=1 00:16:59.624 
--rc genhtml_function_coverage=1 00:16:59.624 --rc genhtml_legend=1 00:16:59.624 --rc geninfo_all_blocks=1 00:16:59.624 --rc geninfo_unexecuted_blocks=1 00:16:59.624 00:16:59.624 ' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:59.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.624 --rc genhtml_branch_coverage=1 00:16:59.624 --rc genhtml_function_coverage=1 00:16:59.624 --rc genhtml_legend=1 00:16:59.624 --rc geninfo_all_blocks=1 00:16:59.624 --rc geninfo_unexecuted_blocks=1 00:16:59.624 00:16:59.624 ' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:59.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:59.624 --rc genhtml_branch_coverage=1 00:16:59.624 --rc genhtml_function_coverage=1 00:16:59.624 --rc genhtml_legend=1 00:16:59.624 --rc geninfo_all_blocks=1 00:16:59.624 --rc geninfo_unexecuted_blocks=1 00:16:59.624 00:16:59.624 ' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=86065 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 86065 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 86065 ']' 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.624 21:58:44 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:59.883 [2024-09-30 21:58:44.449137] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:16:59.883 [2024-09-30 21:58:44.449424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86065 ] 00:16:59.883 [2024-09-30 21:58:44.579413] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
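The bring-up just traced (bdevperf.sh@17-21) follows the usual SPDK harness pattern; a condensed sketch, with the command line and trap taken from the xtrace and the backgrounding/listen poll assumed:

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
"$bdevperf" -z -T ftl0 &    # -z: start idle and wait for RPC configuration; -T ftl0: per the trace, the bdev under test
bdevperf_pid=$!             # 86065 in this run
trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
# waitforlisten: poll (up to max_retries=100) until the RPC socket is up
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done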
00:16:59.883 [2024-09-30 21:58:44.595743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.883 [2024-09-30 21:58:44.626986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:00.817 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:01.076 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:01.076 { 00:17:01.076 "name": "nvme0n1", 00:17:01.076 "aliases": [ 00:17:01.076 "ffcb8b03-779c-4150-874c-f909290010b0" 00:17:01.076 ], 00:17:01.076 "product_name": "NVMe disk", 00:17:01.076 "block_size": 4096, 00:17:01.076 "num_blocks": 1310720, 00:17:01.076 "uuid": "ffcb8b03-779c-4150-874c-f909290010b0", 00:17:01.076 "numa_id": -1, 00:17:01.076 "assigned_rate_limits": { 00:17:01.076 "rw_ios_per_sec": 0, 00:17:01.076 "rw_mbytes_per_sec": 0, 00:17:01.076 "r_mbytes_per_sec": 0, 00:17:01.076 "w_mbytes_per_sec": 0 00:17:01.076 }, 00:17:01.076 "claimed": true, 00:17:01.076 "claim_type": "read_many_write_one", 00:17:01.076 "zoned": false, 00:17:01.076 "supported_io_types": { 00:17:01.076 "read": true, 00:17:01.076 "write": true, 00:17:01.076 "unmap": true, 00:17:01.076 "flush": true, 00:17:01.076 "reset": true, 00:17:01.076 "nvme_admin": true, 00:17:01.076 "nvme_io": true, 00:17:01.076 "nvme_io_md": false, 00:17:01.076 "write_zeroes": true, 00:17:01.076 "zcopy": false, 00:17:01.076 "get_zone_info": false, 00:17:01.076 "zone_management": false, 00:17:01.076 "zone_append": false, 00:17:01.076 "compare": true, 00:17:01.076 "compare_and_write": false, 00:17:01.076 "abort": true, 00:17:01.076 "seek_hole": false, 00:17:01.076 "seek_data": false, 00:17:01.076 "copy": true, 00:17:01.076 "nvme_iov_md": false 00:17:01.076 }, 00:17:01.076 "driver_specific": { 00:17:01.076 "nvme": [ 00:17:01.076 { 00:17:01.076 "pci_address": "0000:00:11.0", 00:17:01.076 "trid": { 00:17:01.076 "trtype": "PCIe", 00:17:01.076 "traddr": "0000:00:11.0" 00:17:01.076 }, 00:17:01.076 "ctrlr_data": { 00:17:01.076 "cntlid": 0, 00:17:01.076 "vendor_id": "0x1b36", 00:17:01.076 "model_number": "QEMU NVMe Ctrl", 
00:17:01.077 "serial_number": "12341", 00:17:01.077 "firmware_revision": "8.0.0", 00:17:01.077 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:01.077 "oacs": { 00:17:01.077 "security": 0, 00:17:01.077 "format": 1, 00:17:01.077 "firmware": 0, 00:17:01.077 "ns_manage": 1 00:17:01.077 }, 00:17:01.077 "multi_ctrlr": false, 00:17:01.077 "ana_reporting": false 00:17:01.077 }, 00:17:01.077 "vs": { 00:17:01.077 "nvme_version": "1.4" 00:17:01.077 }, 00:17:01.077 "ns_data": { 00:17:01.077 "id": 1, 00:17:01.077 "can_share": false 00:17:01.077 } 00:17:01.077 } 00:17:01.077 ], 00:17:01.077 "mp_policy": "active_passive" 00:17:01.077 } 00:17:01.077 } 00:17:01.077 ]' 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:01.077 21:58:45 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:01.335 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=efdddd32-0639-4940-8d7e-24f3da8beb6e 00:17:01.335 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:01.335 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u efdddd32-0639-4940-8d7e-24f3da8beb6e 00:17:01.593 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:01.851 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b6bc14df-ab1d-439d-bcea-545f1affd892 00:17:01.851 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b6bc14df-ab1d-439d-bcea-545f1affd892 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:02.109 21:58:46 
ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.109 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:02.109 { 00:17:02.109 "name": "8366a977-04db-4f60-95ac-924e14b69abb", 00:17:02.109 "aliases": [ 00:17:02.109 "lvs/nvme0n1p0" 00:17:02.109 ], 00:17:02.109 "product_name": "Logical Volume", 00:17:02.109 "block_size": 4096, 00:17:02.109 "num_blocks": 26476544, 00:17:02.109 "uuid": "8366a977-04db-4f60-95ac-924e14b69abb", 00:17:02.109 "assigned_rate_limits": { 00:17:02.109 "rw_ios_per_sec": 0, 00:17:02.110 "rw_mbytes_per_sec": 0, 00:17:02.110 "r_mbytes_per_sec": 0, 00:17:02.110 "w_mbytes_per_sec": 0 00:17:02.110 }, 00:17:02.110 "claimed": false, 00:17:02.110 "zoned": false, 00:17:02.110 "supported_io_types": { 00:17:02.110 "read": true, 00:17:02.110 "write": true, 00:17:02.110 "unmap": true, 00:17:02.110 "flush": false, 00:17:02.110 "reset": true, 00:17:02.110 "nvme_admin": false, 00:17:02.110 "nvme_io": false, 00:17:02.110 "nvme_io_md": false, 00:17:02.110 "write_zeroes": true, 00:17:02.110 "zcopy": false, 00:17:02.110 "get_zone_info": false, 00:17:02.110 "zone_management": false, 00:17:02.110 "zone_append": false, 00:17:02.110 "compare": false, 00:17:02.110 "compare_and_write": false, 00:17:02.110 "abort": false, 00:17:02.110 "seek_hole": true, 00:17:02.110 "seek_data": true, 00:17:02.110 "copy": false, 00:17:02.110 "nvme_iov_md": false 00:17:02.110 }, 00:17:02.110 "driver_specific": { 00:17:02.110 "lvol": { 00:17:02.110 "lvol_store_uuid": "b6bc14df-ab1d-439d-bcea-545f1affd892", 00:17:02.110 "base_bdev": "nvme0n1", 00:17:02.110 "thin_provision": true, 00:17:02.110 "num_allocated_clusters": 0, 00:17:02.110 "snapshot": false, 00:17:02.110 "clone": false, 00:17:02.110 "esnap_clone": false 00:17:02.110 } 00:17:02.110 } 00:17:02.110 } 00:17:02.110 ]' 00:17:02.110 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:02.110 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:02.110 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:02.368 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:02.368 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:02.368 21:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:02.368 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:02.368 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:02.368 21:58:46 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:02.627 21:58:47 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1381 -- # local nb 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:02.628 { 00:17:02.628 "name": "8366a977-04db-4f60-95ac-924e14b69abb", 00:17:02.628 "aliases": [ 00:17:02.628 "lvs/nvme0n1p0" 00:17:02.628 ], 00:17:02.628 "product_name": "Logical Volume", 00:17:02.628 "block_size": 4096, 00:17:02.628 "num_blocks": 26476544, 00:17:02.628 "uuid": "8366a977-04db-4f60-95ac-924e14b69abb", 00:17:02.628 "assigned_rate_limits": { 00:17:02.628 "rw_ios_per_sec": 0, 00:17:02.628 "rw_mbytes_per_sec": 0, 00:17:02.628 "r_mbytes_per_sec": 0, 00:17:02.628 "w_mbytes_per_sec": 0 00:17:02.628 }, 00:17:02.628 "claimed": false, 00:17:02.628 "zoned": false, 00:17:02.628 "supported_io_types": { 00:17:02.628 "read": true, 00:17:02.628 "write": true, 00:17:02.628 "unmap": true, 00:17:02.628 "flush": false, 00:17:02.628 "reset": true, 00:17:02.628 "nvme_admin": false, 00:17:02.628 "nvme_io": false, 00:17:02.628 "nvme_io_md": false, 00:17:02.628 "write_zeroes": true, 00:17:02.628 "zcopy": false, 00:17:02.628 "get_zone_info": false, 00:17:02.628 "zone_management": false, 00:17:02.628 "zone_append": false, 00:17:02.628 "compare": false, 00:17:02.628 "compare_and_write": false, 00:17:02.628 "abort": false, 00:17:02.628 "seek_hole": true, 00:17:02.628 "seek_data": true, 00:17:02.628 "copy": false, 00:17:02.628 "nvme_iov_md": false 00:17:02.628 }, 00:17:02.628 "driver_specific": { 00:17:02.628 "lvol": { 00:17:02.628 "lvol_store_uuid": "b6bc14df-ab1d-439d-bcea-545f1affd892", 00:17:02.628 "base_bdev": "nvme0n1", 00:17:02.628 "thin_provision": true, 00:17:02.628 "num_allocated_clusters": 0, 00:17:02.628 "snapshot": false, 00:17:02.628 "clone": false, 00:17:02.628 "esnap_clone": false 00:17:02.628 } 00:17:02.628 } 00:17:02.628 } 00:17:02.628 ]' 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:02.628 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=8366a977-04db-4f60-95ac-924e14b69abb 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:17:02.886 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:17:02.887 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8366a977-04db-4f60-95ac-924e14b69abb 
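Condensed, the bdev preparation traced above boils down to a short RPC sequence. A sketch using this run's values (the lvstore and lvol UUIDs are generated at create time, so they will differ on a fresh run):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Base device: QEMU NVMe namespace at 0000:00:11.0, exposed as nvme0n1.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

  # clear_lvols drops stores left over from earlier runs, then a fresh store
  # and a thin-provisioned (-t) 103424 MiB volume are created on the namespace.
  $RPC bdev_lvol_get_lvstores | jq -r '.[] | .uuid'
  $RPC bdev_lvol_delete_lvstore -u efdddd32-0639-4940-8d7e-24f3da8beb6e
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u b6bc14df-ab1d-439d-bcea-545f1affd892

  # NV cache: second controller at 0000:00:10.0; the split carves a single
  # 5171 MiB partition, nvc0n1p0, which becomes the write-buffer cache.
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  $RPC bdev_split_create nvc0n1 -s 5171 1

The sizes tie together: get_bdev_size derives MiB from the bdev_get_bdevs JSON as block_size * num_blocks / 2^20 (4096 * 26476544 / 2^20 = 103424 for the volume), and the 5171 MiB cache size matches 103424 / 20 in integer arithmetic, i.e. 5 % of the base volume.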
00:17:03.145 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:03.145 { 00:17:03.145 "name": "8366a977-04db-4f60-95ac-924e14b69abb", 00:17:03.145 "aliases": [ 00:17:03.145 "lvs/nvme0n1p0" 00:17:03.145 ], 00:17:03.145 "product_name": "Logical Volume", 00:17:03.145 "block_size": 4096, 00:17:03.145 "num_blocks": 26476544, 00:17:03.145 "uuid": "8366a977-04db-4f60-95ac-924e14b69abb", 00:17:03.145 "assigned_rate_limits": { 00:17:03.145 "rw_ios_per_sec": 0, 00:17:03.145 "rw_mbytes_per_sec": 0, 00:17:03.145 "r_mbytes_per_sec": 0, 00:17:03.145 "w_mbytes_per_sec": 0 00:17:03.145 }, 00:17:03.145 "claimed": false, 00:17:03.145 "zoned": false, 00:17:03.145 "supported_io_types": { 00:17:03.145 "read": true, 00:17:03.145 "write": true, 00:17:03.145 "unmap": true, 00:17:03.145 "flush": false, 00:17:03.145 "reset": true, 00:17:03.145 "nvme_admin": false, 00:17:03.145 "nvme_io": false, 00:17:03.145 "nvme_io_md": false, 00:17:03.145 "write_zeroes": true, 00:17:03.145 "zcopy": false, 00:17:03.145 "get_zone_info": false, 00:17:03.145 "zone_management": false, 00:17:03.145 "zone_append": false, 00:17:03.145 "compare": false, 00:17:03.145 "compare_and_write": false, 00:17:03.145 "abort": false, 00:17:03.145 "seek_hole": true, 00:17:03.145 "seek_data": true, 00:17:03.145 "copy": false, 00:17:03.145 "nvme_iov_md": false 00:17:03.145 }, 00:17:03.145 "driver_specific": { 00:17:03.145 "lvol": { 00:17:03.145 "lvol_store_uuid": "b6bc14df-ab1d-439d-bcea-545f1affd892", 00:17:03.145 "base_bdev": "nvme0n1", 00:17:03.145 "thin_provision": true, 00:17:03.145 "num_allocated_clusters": 0, 00:17:03.145 "snapshot": false, 00:17:03.145 "clone": false, 00:17:03.145 "esnap_clone": false 00:17:03.145 } 00:17:03.145 } 00:17:03.145 } 00:17:03.145 ]' 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:03.145 21:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8366a977-04db-4f60-95ac-924e14b69abb -c nvc0n1p0 --l2p_dram_limit 20 00:17:03.405 [2024-09-30 21:58:48.110647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.110688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:03.405 [2024-09-30 21:58:48.110699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:03.405 [2024-09-30 21:58:48.110707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.110748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.110758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:03.405 [2024-09-30 21:58:48.110765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:03.405 [2024-09-30 21:58:48.110777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 
21:58:48.110790] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:03.405 [2024-09-30 21:58:48.110996] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:03.405 [2024-09-30 21:58:48.111006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.111013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:03.405 [2024-09-30 21:58:48.111019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:17:03.405 [2024-09-30 21:58:48.111028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.111049] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cd2c64ed-c389-4ccf-9e49-0a03702fc960 00:17:03.405 [2024-09-30 21:58:48.112102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.112127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:03.405 [2024-09-30 21:58:48.112136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:03.405 [2024-09-30 21:58:48.112142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.117285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.117311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:03.405 [2024-09-30 21:58:48.117322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:17:03.405 [2024-09-30 21:58:48.117328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.117396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.117403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:03.405 [2024-09-30 21:58:48.117411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:03.405 [2024-09-30 21:58:48.117420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.117451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.117458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:03.405 [2024-09-30 21:58:48.117465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:03.405 [2024-09-30 21:58:48.117471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.117488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:03.405 [2024-09-30 21:58:48.118789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.118815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:03.405 [2024-09-30 21:58:48.118826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.308 ms 00:17:03.405 [2024-09-30 21:58:48.118835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.118857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.118866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:03.405 [2024-09-30 
21:58:48.118872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:03.405 [2024-09-30 21:58:48.118879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.118895] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:03.405 [2024-09-30 21:58:48.119011] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:03.405 [2024-09-30 21:58:48.119022] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:03.405 [2024-09-30 21:58:48.119033] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:03.405 [2024-09-30 21:58:48.119041] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119050] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119056] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:03.405 [2024-09-30 21:58:48.119065] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:03.405 [2024-09-30 21:58:48.119071] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:03.405 [2024-09-30 21:58:48.119077] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:03.405 [2024-09-30 21:58:48.119083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.119091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:03.405 [2024-09-30 21:58:48.119097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:17:03.405 [2024-09-30 21:58:48.119105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.119169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.405 [2024-09-30 21:58:48.119176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:03.405 [2024-09-30 21:58:48.119184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:03.405 [2024-09-30 21:58:48.119200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.405 [2024-09-30 21:58:48.119268] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:03.405 [2024-09-30 21:58:48.119278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:03.405 [2024-09-30 21:58:48.119285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:03.405 [2024-09-30 21:58:48.119306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:03.405 [2024-09-30 21:58:48.119328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.50 MiB 00:17:03.405 [2024-09-30 21:58:48.119339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:03.405 [2024-09-30 21:58:48.119348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:03.405 [2024-09-30 21:58:48.119353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:03.405 [2024-09-30 21:58:48.119360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:03.405 [2024-09-30 21:58:48.119364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:03.405 [2024-09-30 21:58:48.119371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:03.405 [2024-09-30 21:58:48.119384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:03.405 [2024-09-30 21:58:48.119401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:03.405 [2024-09-30 21:58:48.119420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:03.405 [2024-09-30 21:58:48.119437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:03.405 [2024-09-30 21:58:48.119460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:03.405 [2024-09-30 21:58:48.119473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:03.405 [2024-09-30 21:58:48.119479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:03.405 [2024-09-30 21:58:48.119486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.405 [2024-09-30 21:58:48.119491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:03.405 [2024-09-30 21:58:48.119498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:03.405 [2024-09-30 21:58:48.119504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:03.405 [2024-09-30 21:58:48.119511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:03.405 [2024-09-30 21:58:48.119517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:03.405 [2024-09-30 21:58:48.119524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.406 [2024-09-30 21:58:48.119530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:03.406 [2024-09-30 21:58:48.119537] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:03.406 [2024-09-30 21:58:48.119543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.406 [2024-09-30 21:58:48.119551] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:03.406 [2024-09-30 21:58:48.119557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:03.406 [2024-09-30 21:58:48.119564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:03.406 [2024-09-30 21:58:48.119570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:03.406 [2024-09-30 21:58:48.119578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:03.406 [2024-09-30 21:58:48.119585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:03.406 [2024-09-30 21:58:48.119592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:03.406 [2024-09-30 21:58:48.119598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:03.406 [2024-09-30 21:58:48.119606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:03.406 [2024-09-30 21:58:48.119612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:03.406 [2024-09-30 21:58:48.119621] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:03.406 [2024-09-30 21:58:48.119629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.406 [2024-09-30 21:58:48.119638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:03.406 [2024-09-30 21:58:48.119645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:03.406 [2024-09-30 21:58:48.119653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:03.406 [2024-09-30 21:58:48.119659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:03.406 [2024-09-30 21:58:48.119668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:03.406 [2024-09-30 21:58:48.119674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:03.406 [2024-09-30 21:58:48.119681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:03.406 [2024-09-30 21:58:48.119688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:03.406 [2024-09-30 21:58:48.119696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:03.406 [2024-09-30 21:58:48.119702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:03.406 [2024-09-30 21:58:48.119709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:03.406 [2024-09-30 
21:58:48.119716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:03.406 [2024-09-30 21:58:48.119723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:03.406 [2024-09-30 21:58:48.119729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:03.406 [2024-09-30 21:58:48.119737] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:03.406 [2024-09-30 21:58:48.119746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:03.406 [2024-09-30 21:58:48.119756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:03.406 [2024-09-30 21:58:48.119761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:03.406 [2024-09-30 21:58:48.119768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:03.406 [2024-09-30 21:58:48.119774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:03.406 [2024-09-30 21:58:48.119783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:03.406 [2024-09-30 21:58:48.119791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:03.406 [2024-09-30 21:58:48.119798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:17:03.406 [2024-09-30 21:58:48.119803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:03.406 [2024-09-30 21:58:48.119830] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
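The device being brought up in this trace was created with a single RPC; the long timeout is deliberate, since first-time startup scrubs the entire NV cache data region (the notice just above). Using this run's lvol UUID:

  # -d: the thin-provisioned base volume; -c: the 5171 MiB cache partition
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
      -b ftl0 -d 8366a977-04db-4f60-95ac-924e14b69abb -c nvc0n1p0 --l2p_dram_limit 20

The layout dump is consistent with those parameters: 20971520 L2P entries * 4 bytes per address is exactly the 80.00 MiB l2p region, and 20971520 entries * 4096-byte blocks gives 80 GiB of addressable user space on the 103424 MiB base device. --l2p_dram_limit 20 caps the resident L2P at 20 MiB, which the startup below acknowledges as "l2p maximum resident size is: 19 (of 20) MiB". Once startup completes, the RPC returns the device name and the freshly generated superblock UUID as a small JSON object, visible further down.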
00:17:03.406 [2024-09-30 21:58:48.119837] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:05.937 [2024-09-30 21:58:50.192120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.192598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:05.937 [2024-09-30 21:58:50.192802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2072.265 ms 00:17:05.937 [2024-09-30 21:58:50.193013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.213058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.213304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:05.937 [2024-09-30 21:58:50.213409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.816 ms 00:17:05.937 [2024-09-30 21:58:50.213447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.213702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.213815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:05.937 [2024-09-30 21:58:50.213905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:05.937 [2024-09-30 21:58:50.214051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.223999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.224050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:05.937 [2024-09-30 21:58:50.224067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.878 ms 00:17:05.937 [2024-09-30 21:58:50.224078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.224112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.224124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:05.937 [2024-09-30 21:58:50.224138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:05.937 [2024-09-30 21:58:50.224151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.224628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.224768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:05.937 [2024-09-30 21:58:50.224797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:17:05.937 [2024-09-30 21:58:50.224809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.224979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.224993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:05.937 [2024-09-30 21:58:50.225008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:05.937 [2024-09-30 21:58:50.225020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.229692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.229724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:05.937 [2024-09-30 
21:58:50.229736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.644 ms 00:17:05.937 [2024-09-30 21:58:50.229743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.238026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:05.937 [2024-09-30 21:58:50.243178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.243220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:05.937 [2024-09-30 21:58:50.243230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.382 ms 00:17:05.937 [2024-09-30 21:58:50.243243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.937 [2024-09-30 21:58:50.292307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.937 [2024-09-30 21:58:50.292369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:05.937 [2024-09-30 21:58:50.292381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.042 ms 00:17:05.937 [2024-09-30 21:58:50.292391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.292561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.292581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:05.938 [2024-09-30 21:58:50.292590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:17:05.938 [2024-09-30 21:58:50.292599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.295510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.295547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:05.938 [2024-09-30 21:58:50.295558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.884 ms 00:17:05.938 [2024-09-30 21:58:50.295568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.297806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.297840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:05.938 [2024-09-30 21:58:50.297850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:17:05.938 [2024-09-30 21:58:50.297859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.298146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.298170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:05.938 [2024-09-30 21:58:50.298179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:05.938 [2024-09-30 21:58:50.298205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.323912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.323953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:05.938 [2024-09-30 21:58:50.323964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.690 ms 00:17:05.938 [2024-09-30 21:58:50.323973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.327719] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.327760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:05.938 [2024-09-30 21:58:50.327771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:17:05.938 [2024-09-30 21:58:50.327781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.330436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.330566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:05.938 [2024-09-30 21:58:50.330581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:17:05.938 [2024-09-30 21:58:50.330589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.333653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.333773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:05.938 [2024-09-30 21:58:50.333788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.036 ms 00:17:05.938 [2024-09-30 21:58:50.333796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.333826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.333839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:05.938 [2024-09-30 21:58:50.333850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:05.938 [2024-09-30 21:58:50.333859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.333918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:05.938 [2024-09-30 21:58:50.333928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:05.938 [2024-09-30 21:58:50.333936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:05.938 [2024-09-30 21:58:50.333945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:05.938 [2024-09-30 21:58:50.334796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2223.773 ms, result 0 00:17:05.938 { 00:17:05.938 "name": "ftl0", 00:17:05.938 "uuid": "cd2c64ed-c389-4ccf-9e49-0a03702fc960" 00:17:05.938 } 00:17:05.938 21:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:05.938 21:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:05.938 21:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:05.938 21:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:05.938 [2024-09-30 21:58:50.639809] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:05.938 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:05.938 Zero copy mechanism will not be used. 00:17:05.938 Running I/O for 4 seconds... 
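perform_tests submits a timed job to the already-running bdevperf application over its RPC socket; the script drives three 4-second workloads against ftl0 back to back, condensed here from this and the following output:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

  $BDEVPERF perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD1, 68 KiB writes
  $BDEVPERF perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD128, 4 KiB writes
  $BDEVPERF perform_tests -q 128 -w verify    -t 4 -o 4096    # QD128, write then read back

The 68 KiB IO size of the first job exceeds bdevperf's 65536-byte zero-copy threshold, hence the "Zero copy mechanism will not be used" notice above.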
00:17:10.115 3171.00 IOPS, 210.57 MiB/s 3192.50 IOPS, 212.00 MiB/s 3283.33 IOPS, 218.03 MiB/s 3270.75 IOPS, 217.20 MiB/s 00:17:10.115 Latency(us) 00:17:10.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.115 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:10.115 ftl0 : 4.00 3269.56 217.12 0.00 0.00 321.96 143.36 2243.35 00:17:10.115 =================================================================================================================== 00:17:10.115 Total : 3269.56 217.12 0.00 0.00 321.96 143.36 2243.35 00:17:10.115 [2024-09-30 21:58:54.647885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:10.115 { 00:17:10.115 "results": [ 00:17:10.115 { 00:17:10.115 "job": "ftl0", 00:17:10.115 "core_mask": "0x1", 00:17:10.115 "workload": "randwrite", 00:17:10.115 "status": "finished", 00:17:10.115 "queue_depth": 1, 00:17:10.115 "io_size": 69632, 00:17:10.115 "runtime": 4.001758, 00:17:10.115 "iops": 3269.5630270496117, 00:17:10.115 "mibps": 217.11941976501328, 00:17:10.115 "io_failed": 0, 00:17:10.115 "io_timeout": 0, 00:17:10.115 "avg_latency_us": 321.9595381323049, 00:17:10.115 "min_latency_us": 143.36, 00:17:10.115 "max_latency_us": 2243.3476923076923 00:17:10.115 } 00:17:10.115 ], 00:17:10.115 "core_count": 1 00:17:10.115 } 00:17:10.115 21:58:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:10.115 [2024-09-30 21:58:54.752505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:10.115 Running I/O for 4 seconds... 00:17:14.288 11407.00 IOPS, 44.56 MiB/s 11464.50 IOPS, 44.78 MiB/s 11365.67 IOPS, 44.40 MiB/s 11216.50 IOPS, 43.81 MiB/s 00:17:14.289 Latency(us) 00:17:14.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.289 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.289 ftl0 : 4.01 11208.51 43.78 0.00 0.00 11398.77 285.14 29440.79 00:17:14.289 =================================================================================================================== 00:17:14.289 Total : 11208.51 43.78 0.00 0.00 11398.77 0.00 29440.79 00:17:14.289 [2024-09-30 21:58:58.772708] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:14.289 { 00:17:14.289 "results": [ 00:17:14.289 { 00:17:14.289 "job": "ftl0", 00:17:14.289 "core_mask": "0x1", 00:17:14.289 "workload": "randwrite", 00:17:14.289 "status": "finished", 00:17:14.289 "queue_depth": 128, 00:17:14.289 "io_size": 4096, 00:17:14.289 "runtime": 4.013737, 00:17:14.289 "iops": 11208.507184202652, 00:17:14.289 "mibps": 43.78323118829161, 00:17:14.289 "io_failed": 0, 00:17:14.289 "io_timeout": 0, 00:17:14.289 "avg_latency_us": 11398.773299409757, 00:17:14.289 "min_latency_us": 285.1446153846154, 00:17:14.289 "max_latency_us": 29440.78769230769 00:17:14.289 } 00:17:14.289 ], 00:17:14.289 "core_count": 1 00:17:14.289 } 00:17:14.289 21:58:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:14.289 [2024-09-30 21:58:58.876469] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:14.289 Running I/O for 4 seconds... 
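The MiB/s column in these result tables is simply IOPS times IO size, so the pairs can be cross-checked; a quick spot check of the two completed randwrite runs (python3 only for the float math):

  # 3269.56 IOPS * 69632 B / 2^20 ~= 217.12 MiB/s   (QD1, 68 KiB)
  # 11208.51 IOPS * 4096 B / 2^20 ~=  43.78 MiB/s   (QD128, 4 KiB)
  python3 -c 'print(3269.56 * 69632 / 2**20, 11208.51 * 4096 / 2**20)'

Both agree with the "mibps" fields in the JSON results up to rounding of the inputs.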
00:17:18.110 8870.00 IOPS, 34.65 MiB/s 8926.00 IOPS, 34.87 MiB/s 8978.00 IOPS, 35.07 MiB/s 9008.75 IOPS, 35.19 MiB/s 00:17:18.110 Latency(us) 00:17:18.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.110 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:18.110 Verification LBA range: start 0x0 length 0x1400000 00:17:18.110 ftl0 : 4.01 9020.76 35.24 0.00 0.00 14148.45 207.95 27021.00 00:17:18.110 =================================================================================================================== 00:17:18.110 Total : 9020.76 35.24 0.00 0.00 14148.45 0.00 27021.00 00:17:18.110 [2024-09-30 21:59:02.892166] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:18.110 { 00:17:18.110 "results": [ 00:17:18.110 { 00:17:18.110 "job": "ftl0", 00:17:18.110 "core_mask": "0x1", 00:17:18.110 "workload": "verify", 00:17:18.110 "status": "finished", 00:17:18.110 "verify_range": { 00:17:18.110 "start": 0, 00:17:18.110 "length": 20971520 00:17:18.110 }, 00:17:18.110 "queue_depth": 128, 00:17:18.110 "io_size": 4096, 00:17:18.110 "runtime": 4.008755, 00:17:18.110 "iops": 9020.755820697448, 00:17:18.110 "mibps": 35.23732742459941, 00:17:18.110 "io_failed": 0, 00:17:18.110 "io_timeout": 0, 00:17:18.110 "avg_latency_us": 14148.453232249747, 00:17:18.110 "min_latency_us": 207.95076923076923, 00:17:18.110 "max_latency_us": 27020.996923076924 00:17:18.110 } 00:17:18.110 ], 00:17:18.110 "core_count": 1 00:17:18.110 } 00:17:18.110 21:59:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:18.369 [2024-09-30 21:59:03.092490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.369 [2024-09-30 21:59:03.092532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:18.369 [2024-09-30 21:59:03.092543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:18.369 [2024-09-30 21:59:03.092553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.369 [2024-09-30 21:59:03.092573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:18.369 [2024-09-30 21:59:03.092996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.369 [2024-09-30 21:59:03.093011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:18.369 [2024-09-30 21:59:03.093032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:17:18.369 [2024-09-30 21:59:03.093040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.369 [2024-09-30 21:59:03.094847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.369 [2024-09-30 21:59:03.094882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:18.369 [2024-09-30 21:59:03.094895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:17:18.369 [2024-09-30 21:59:03.094903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.232724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.232756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:18.629 [2024-09-30 21:59:03.232769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 137.801 ms 00:17:18.629 [2024-09-30 
21:59:03.232777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.238910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.238936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:18.629 [2024-09-30 21:59:03.238948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:17:18.629 [2024-09-30 21:59:03.238957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.240010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.240133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:18.629 [2024-09-30 21:59:03.240150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:17:18.629 [2024-09-30 21:59:03.240158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.244008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.244040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:18.629 [2024-09-30 21:59:03.244059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 00:17:18.629 [2024-09-30 21:59:03.244066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.244171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.244180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:18.629 [2024-09-30 21:59:03.244205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:18.629 [2024-09-30 21:59:03.244213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.246238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.246351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:18.629 [2024-09-30 21:59:03.246368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.006 ms 00:17:18.629 [2024-09-30 21:59:03.246375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.247745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.247774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:18.629 [2024-09-30 21:59:03.247785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:17:18.629 [2024-09-30 21:59:03.247791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.248669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.248697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:18.629 [2024-09-30 21:59:03.248710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.847 ms 00:17:18.629 [2024-09-30 21:59:03.248717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.249617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.629 [2024-09-30 21:59:03.249721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:18.629 [2024-09-30 21:59:03.249738] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:17:18.629 [2024-09-30 21:59:03.249744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.629 [2024-09-30 21:59:03.249771] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:18.629 [2024-09-30 21:59:03.249785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 
00:17:18.630 [2024-09-30 21:59:03.249980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.249997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 
wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:18.630 [2024-09-30 21:59:03.250568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250616] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:18.631 [2024-09-30 21:59:03.250648] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:18.631 [2024-09-30 21:59:03.250657] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cd2c64ed-c389-4ccf-9e49-0a03702fc960 00:17:18.631 [2024-09-30 21:59:03.250666] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:18.631 [2024-09-30 21:59:03.250675] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:18.631 [2024-09-30 21:59:03.250681] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:18.631 [2024-09-30 21:59:03.250693] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:18.631 [2024-09-30 21:59:03.250700] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:18.631 [2024-09-30 21:59:03.250710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:18.631 [2024-09-30 21:59:03.250716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:18.631 [2024-09-30 21:59:03.250724] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:18.631 [2024-09-30 21:59:03.250730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:18.631 [2024-09-30 21:59:03.250738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.631 [2024-09-30 21:59:03.250746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:18.631 [2024-09-30 21:59:03.250757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:17:18.631 [2024-09-30 21:59:03.250764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.252149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.631 [2024-09-30 21:59:03.252173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:18.631 [2024-09-30 21:59:03.252183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.368 ms 00:17:18.631 [2024-09-30 21:59:03.252208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.252285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:18.631 [2024-09-30 21:59:03.252293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:18.631 [2024-09-30 21:59:03.252304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:18.631 [2024-09-30 21:59:03.252312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.256969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.257062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:18.631 [2024-09-30 21:59:03.257132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.257154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.257227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 
21:59:03.257312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:18.631 [2024-09-30 21:59:03.257338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.257359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.257431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.257517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:18.631 [2024-09-30 21:59:03.257543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.257563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.257592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.257680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:18.631 [2024-09-30 21:59:03.257707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.257726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.266341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.266465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:18.631 [2024-09-30 21:59:03.266519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.266546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.274178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.274306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.631 [2024-09-30 21:59:03.274356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.274378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.274439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.274547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.631 [2024-09-30 21:59:03.274573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.274592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.274653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.274760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.631 [2024-09-30 21:59:03.274788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.274797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.274867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.274884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.631 [2024-09-30 21:59:03.274894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.274902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.274932] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.274941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.631 [2024-09-30 21:59:03.274949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.274957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.274991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.275000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.631 [2024-09-30 21:59:03.275010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.275017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.275055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.631 [2024-09-30 21:59:03.275064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.631 [2024-09-30 21:59:03.275076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.631 [2024-09-30 21:59:03.275083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.631 [2024-09-30 21:59:03.275355] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 182.670 ms, result 0 00:17:18.631 true 00:17:18.631 21:59:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 86065 00:17:18.631 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 86065 ']' 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 86065 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86065 00:17:18.632 killing process with pid 86065 00:17:18.632 Received shutdown signal, test time was about 4.000000 seconds 00:17:18.632 00:17:18.632 Latency(us) 00:17:18.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.632 =================================================================================================================== 00:17:18.632 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86065' 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 86065 00:17:18.632 21:59:03 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 86065 00:17:25.192 Remove shared memory files 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:25.192 21:59:09 
ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:25.192 ************************************ 00:17:25.192 END TEST ftl_bdevperf 00:17:25.192 ************************************ 00:17:25.192 00:17:25.192 real 0m25.313s 00:17:25.192 user 0m27.985s 00:17:25.192 sys 0m0.871s 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.192 21:59:09 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:25.192 21:59:09 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:25.192 21:59:09 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:25.192 21:59:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.192 21:59:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:25.192 ************************************ 00:17:25.192 START TEST ftl_trim 00:17:25.192 ************************************ 00:17:25.192 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:25.192 * Looking for test storage... 00:17:25.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.192 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:25.192 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version 00:17:25.192 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:25.192 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.192 21:59:09 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.193 21:59:09 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.193 --rc genhtml_branch_coverage=1 00:17:25.193 --rc genhtml_function_coverage=1 00:17:25.193 --rc genhtml_legend=1 00:17:25.193 --rc geninfo_all_blocks=1 00:17:25.193 --rc geninfo_unexecuted_blocks=1 00:17:25.193 00:17:25.193 ' 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.193 --rc genhtml_branch_coverage=1 00:17:25.193 --rc genhtml_function_coverage=1 00:17:25.193 --rc genhtml_legend=1 00:17:25.193 --rc geninfo_all_blocks=1 00:17:25.193 --rc geninfo_unexecuted_blocks=1 00:17:25.193 00:17:25.193 ' 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.193 --rc genhtml_branch_coverage=1 00:17:25.193 --rc genhtml_function_coverage=1 00:17:25.193 --rc genhtml_legend=1 00:17:25.193 --rc geninfo_all_blocks=1 00:17:25.193 --rc geninfo_unexecuted_blocks=1 00:17:25.193 00:17:25.193 ' 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:25.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.193 --rc genhtml_branch_coverage=1 00:17:25.193 --rc genhtml_function_coverage=1 00:17:25.193 --rc genhtml_legend=1 00:17:25.193 --rc geninfo_all_blocks=1 00:17:25.193 --rc geninfo_unexecuted_blocks=1 00:17:25.193 00:17:25.193 ' 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
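The lt/cmp_versions exchange traced above is the harness's numeric version check: both version strings are split on '.', '-' or ':' and the fields are compared left to right until one side wins. A stand-alone sketch of the same idea (version_lt is a hypothetical name, not the actual scripts/common.sh implementation):

    #!/usr/bin/env bash
    # version_lt A B: succeed (return 0) when A sorts strictly before B,
    # comparing dot/dash/colon-separated fields numerically, as cmp_versions does above.
    version_lt() {
      local IFS='.-:'
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # missing fields count as 0
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # decided on the first field, 1 < 2, as in the trace

This mirrors the traced decision path: ver1=(1 15), ver2=(2), and the loop returns true at the first component because 1 < 2.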
00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.193 21:59:09 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=86400 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:25.193 21:59:09 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 86400 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86400 ']' 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:25.193 21:59:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:25.193 [2024-09-30 21:59:09.775868] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:17:25.193 [2024-09-30 21:59:09.776132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86400 ] 00:17:25.193 [2024-09-30 21:59:09.906050] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:25.193 [2024-09-30 21:59:09.923703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.193 [2024-09-30 21:59:09.958827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.193 [2024-09-30 21:59:09.958939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.193 [2024-09-30 21:59:09.958980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:26.128 21:59:10 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:26.128 21:59:10 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:26.128 21:59:10 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:26.128 21:59:10 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:26.128 21:59:10 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:26.128 21:59:10 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:26.128 21:59:10 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:26.128 21:59:10 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:26.386 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:26.386 { 00:17:26.386 "name": "nvme0n1", 00:17:26.386 "aliases": [ 00:17:26.386 "d310b41e-df3c-4224-a4a3-8de31274dfdd" 00:17:26.386 ], 00:17:26.386 "product_name": "NVMe disk", 00:17:26.386 "block_size": 4096, 00:17:26.386 "num_blocks": 1310720, 00:17:26.386 "uuid": "d310b41e-df3c-4224-a4a3-8de31274dfdd", 00:17:26.386 "numa_id": -1, 00:17:26.386 "assigned_rate_limits": { 00:17:26.386 "rw_ios_per_sec": 0, 00:17:26.386 "rw_mbytes_per_sec": 0, 00:17:26.386 "r_mbytes_per_sec": 0, 00:17:26.386 "w_mbytes_per_sec": 0 00:17:26.386 }, 00:17:26.386 "claimed": true, 00:17:26.386 "claim_type": "read_many_write_one", 00:17:26.386 "zoned": false, 00:17:26.386 "supported_io_types": { 00:17:26.386 "read": true, 00:17:26.386 "write": true, 00:17:26.386 "unmap": true, 00:17:26.386 "flush": true, 00:17:26.386 "reset": true, 00:17:26.386 "nvme_admin": true, 00:17:26.386 "nvme_io": true, 00:17:26.386 "nvme_io_md": false, 00:17:26.386 "write_zeroes": true, 00:17:26.386 "zcopy": false, 00:17:26.386 "get_zone_info": false, 00:17:26.386 "zone_management": false, 00:17:26.386 "zone_append": false, 00:17:26.386 "compare": true, 00:17:26.386 "compare_and_write": false, 00:17:26.386 "abort": true, 00:17:26.386 "seek_hole": false, 00:17:26.386 "seek_data": false, 00:17:26.386 "copy": true, 00:17:26.386 "nvme_iov_md": false 00:17:26.386 }, 00:17:26.386 "driver_specific": { 00:17:26.386 "nvme": [ 00:17:26.386 { 00:17:26.386 "pci_address": "0000:00:11.0", 00:17:26.386 "trid": { 00:17:26.386 "trtype": "PCIe", 00:17:26.386 "traddr": "0000:00:11.0" 00:17:26.386 }, 00:17:26.386 "ctrlr_data": { 00:17:26.386 "cntlid": 0, 00:17:26.386 "vendor_id": "0x1b36", 00:17:26.386 "model_number": "QEMU NVMe Ctrl", 00:17:26.386 "serial_number": "12341", 00:17:26.386 "firmware_revision": "8.0.0", 00:17:26.386 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:26.387 "oacs": { 00:17:26.387 "security": 0, 00:17:26.387 "format": 1, 00:17:26.387 "firmware": 0, 00:17:26.387 "ns_manage": 1 00:17:26.387 }, 00:17:26.387 "multi_ctrlr": false, 00:17:26.387 "ana_reporting": false 00:17:26.387 }, 00:17:26.387 "vs": { 00:17:26.387 "nvme_version": "1.4" 00:17:26.387 }, 00:17:26.387 "ns_data": { 00:17:26.387 "id": 1, 00:17:26.387 "can_share": false 00:17:26.387 } 00:17:26.387 } 00:17:26.387 ], 00:17:26.387 "mp_policy": "active_passive" 00:17:26.387 } 00:17:26.387 } 00:17:26.387 ]' 00:17:26.387 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:26.387 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:26.387 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:26.387 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:26.387 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:26.387 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:26.387 21:59:11 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:26.387 21:59:11 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:26.387 21:59:11 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:26.387 21:59:11 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:26.387 21:59:11 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:26.645 21:59:11 ftl.ftl_trim -- ftl/common.sh@28 -- # 
stores=b6bc14df-ab1d-439d-bcea-545f1affd892 00:17:26.645 21:59:11 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:26.645 21:59:11 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b6bc14df-ab1d-439d-bcea-545f1affd892 00:17:26.903 21:59:11 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:27.162 21:59:11 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=e50cf9f8-b914-42c3-a503-07b832f028b6 00:17:27.162 21:59:11 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e50cf9f8-b914-42c3-a503-07b832f028b6 00:17:27.421 21:59:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.421 21:59:11 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.421 21:59:11 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:27.421 21:59:11 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:27.421 21:59:11 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.421 21:59:11 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:27.421 21:59:11 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.421 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.421 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:27.421 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:27.421 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:27.421 21:59:11 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.421 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:27.421 { 00:17:27.421 "name": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:27.421 "aliases": [ 00:17:27.421 "lvs/nvme0n1p0" 00:17:27.421 ], 00:17:27.421 "product_name": "Logical Volume", 00:17:27.421 "block_size": 4096, 00:17:27.421 "num_blocks": 26476544, 00:17:27.421 "uuid": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:27.421 "assigned_rate_limits": { 00:17:27.421 "rw_ios_per_sec": 0, 00:17:27.421 "rw_mbytes_per_sec": 0, 00:17:27.421 "r_mbytes_per_sec": 0, 00:17:27.421 "w_mbytes_per_sec": 0 00:17:27.421 }, 00:17:27.421 "claimed": false, 00:17:27.421 "zoned": false, 00:17:27.421 "supported_io_types": { 00:17:27.421 "read": true, 00:17:27.421 "write": true, 00:17:27.421 "unmap": true, 00:17:27.421 "flush": false, 00:17:27.421 "reset": true, 00:17:27.421 "nvme_admin": false, 00:17:27.421 "nvme_io": false, 00:17:27.421 "nvme_io_md": false, 00:17:27.421 "write_zeroes": true, 00:17:27.421 "zcopy": false, 00:17:27.421 "get_zone_info": false, 00:17:27.421 "zone_management": false, 00:17:27.421 "zone_append": false, 00:17:27.421 "compare": false, 00:17:27.421 "compare_and_write": false, 00:17:27.421 "abort": false, 00:17:27.421 "seek_hole": true, 00:17:27.421 "seek_data": true, 00:17:27.421 "copy": false, 00:17:27.421 "nvme_iov_md": false 00:17:27.421 }, 00:17:27.421 "driver_specific": { 00:17:27.421 "lvol": { 00:17:27.421 "lvol_store_uuid": "e50cf9f8-b914-42c3-a503-07b832f028b6", 00:17:27.421 "base_bdev": "nvme0n1", 00:17:27.421 "thin_provision": true, 
00:17:27.421 "num_allocated_clusters": 0, 00:17:27.421 "snapshot": false, 00:17:27.421 "clone": false, 00:17:27.421 "esnap_clone": false 00:17:27.421 } 00:17:27.421 } 00:17:27.421 } 00:17:27.421 ]' 00:17:27.421 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:27.421 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:27.421 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:27.680 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:27.680 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:27.680 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:27.680 21:59:12 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:27.680 21:59:12 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:27.680 21:59:12 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:27.938 21:59:12 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:27.938 21:59:12 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:27.938 21:59:12 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 874482fe-4e56-441a-b6c2-66da849c13cc 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:27.938 { 00:17:27.938 "name": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:27.938 "aliases": [ 00:17:27.938 "lvs/nvme0n1p0" 00:17:27.938 ], 00:17:27.938 "product_name": "Logical Volume", 00:17:27.938 "block_size": 4096, 00:17:27.938 "num_blocks": 26476544, 00:17:27.938 "uuid": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:27.938 "assigned_rate_limits": { 00:17:27.938 "rw_ios_per_sec": 0, 00:17:27.938 "rw_mbytes_per_sec": 0, 00:17:27.938 "r_mbytes_per_sec": 0, 00:17:27.938 "w_mbytes_per_sec": 0 00:17:27.938 }, 00:17:27.938 "claimed": false, 00:17:27.938 "zoned": false, 00:17:27.938 "supported_io_types": { 00:17:27.938 "read": true, 00:17:27.938 "write": true, 00:17:27.938 "unmap": true, 00:17:27.938 "flush": false, 00:17:27.938 "reset": true, 00:17:27.938 "nvme_admin": false, 00:17:27.938 "nvme_io": false, 00:17:27.938 "nvme_io_md": false, 00:17:27.938 "write_zeroes": true, 00:17:27.938 "zcopy": false, 00:17:27.938 "get_zone_info": false, 00:17:27.938 "zone_management": false, 00:17:27.938 "zone_append": false, 00:17:27.938 "compare": false, 00:17:27.938 "compare_and_write": false, 00:17:27.938 "abort": false, 00:17:27.938 "seek_hole": true, 00:17:27.938 "seek_data": true, 00:17:27.938 "copy": false, 00:17:27.938 "nvme_iov_md": false 00:17:27.938 }, 00:17:27.938 "driver_specific": { 00:17:27.938 "lvol": { 00:17:27.938 "lvol_store_uuid": "e50cf9f8-b914-42c3-a503-07b832f028b6", 00:17:27.938 "base_bdev": "nvme0n1", 00:17:27.938 "thin_provision": true, 00:17:27.938 "num_allocated_clusters": 0, 00:17:27.938 "snapshot": false, 00:17:27.938 "clone": false, 00:17:27.938 
"esnap_clone": false 00:17:27.938 } 00:17:27.938 } 00:17:27.938 } 00:17:27.938 ]' 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:27.938 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:28.197 21:59:12 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:28.197 21:59:12 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:28.197 21:59:12 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:28.197 21:59:12 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:28.197 21:59:12 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 874482fe-4e56-441a-b6c2-66da849c13cc 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=874482fe-4e56-441a-b6c2-66da849c13cc 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:28.197 21:59:12 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 874482fe-4e56-441a-b6c2-66da849c13cc 00:17:28.455 21:59:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:28.455 { 00:17:28.455 "name": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:28.455 "aliases": [ 00:17:28.455 "lvs/nvme0n1p0" 00:17:28.455 ], 00:17:28.455 "product_name": "Logical Volume", 00:17:28.455 "block_size": 4096, 00:17:28.455 "num_blocks": 26476544, 00:17:28.455 "uuid": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:28.455 "assigned_rate_limits": { 00:17:28.455 "rw_ios_per_sec": 0, 00:17:28.455 "rw_mbytes_per_sec": 0, 00:17:28.455 "r_mbytes_per_sec": 0, 00:17:28.455 "w_mbytes_per_sec": 0 00:17:28.455 }, 00:17:28.455 "claimed": false, 00:17:28.455 "zoned": false, 00:17:28.455 "supported_io_types": { 00:17:28.455 "read": true, 00:17:28.455 "write": true, 00:17:28.455 "unmap": true, 00:17:28.455 "flush": false, 00:17:28.455 "reset": true, 00:17:28.455 "nvme_admin": false, 00:17:28.455 "nvme_io": false, 00:17:28.455 "nvme_io_md": false, 00:17:28.455 "write_zeroes": true, 00:17:28.455 "zcopy": false, 00:17:28.455 "get_zone_info": false, 00:17:28.455 "zone_management": false, 00:17:28.455 "zone_append": false, 00:17:28.455 "compare": false, 00:17:28.455 "compare_and_write": false, 00:17:28.455 "abort": false, 00:17:28.455 "seek_hole": true, 00:17:28.455 "seek_data": true, 00:17:28.455 "copy": false, 00:17:28.455 "nvme_iov_md": false 00:17:28.455 }, 00:17:28.455 "driver_specific": { 00:17:28.456 "lvol": { 00:17:28.456 "lvol_store_uuid": "e50cf9f8-b914-42c3-a503-07b832f028b6", 00:17:28.456 "base_bdev": "nvme0n1", 00:17:28.456 "thin_provision": true, 00:17:28.456 "num_allocated_clusters": 0, 00:17:28.456 "snapshot": false, 00:17:28.456 "clone": false, 00:17:28.456 "esnap_clone": false 00:17:28.456 } 00:17:28.456 } 00:17:28.456 } 00:17:28.456 ]' 00:17:28.456 21:59:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:28.456 21:59:13 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:17:28.456 21:59:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:28.456 21:59:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:28.456 21:59:13 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:28.456 21:59:13 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:28.456 21:59:13 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:28.456 21:59:13 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 874482fe-4e56-441a-b6c2-66da849c13cc -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:28.714 [2024-09-30 21:59:13.423425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.423463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.715 [2024-09-30 21:59:13.423476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.715 [2024-09-30 21:59:13.423483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.425390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.425411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.715 [2024-09-30 21:59:13.425421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.879 ms 00:17:28.715 [2024-09-30 21:59:13.425427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.425517] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.715 [2024-09-30 21:59:13.425704] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.715 [2024-09-30 21:59:13.425717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.425724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.715 [2024-09-30 21:59:13.425731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:28.715 [2024-09-30 21:59:13.425736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.425863] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 33834573-af61-4e72-baa0-f895c2fd95b0 00:17:28.715 [2024-09-30 21:59:13.426869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.426890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:28.715 [2024-09-30 21:59:13.426898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:28.715 [2024-09-30 21:59:13.426905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.432093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.432117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.715 [2024-09-30 21:59:13.432124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.113 ms 00:17:28.715 [2024-09-30 21:59:13.432133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.432247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
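Condensed from the RPC traces above, the stack under test is assembled in six calls: attach the base and cache NVMe controllers, carve a thin 103424 MiB lvol out of nvme0n1, split a 5171 MiB write-buffer partition off nvc0n1, and bind both into ftl0. A sketch of that sequence using this run's addresses, sizes, and flags (not the literal trim.sh code; the UUID capture assumes rpc.py prints the created UUIDs, as the trace shows it doing):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
    $rpc bdev_nvme_attach_controller -b nvc0  -t PCIe -a 0000:00:10.0   # NV cache device -> nvc0n1

    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)                    # e.g. e50cf9f8-... in this run
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")         # thin 103424 MiB lvol, e.g. 874482fe-...

    $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0

    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 \
         --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The 240-second rpc.py timeout matters here because bdev_ftl_create scrubs the NV cache region on first start, which the trace below reports taking over two seconds even on this small cache.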
00:17:28.715 [2024-09-30 21:59:13.432258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.715 [2024-09-30 21:59:13.432272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:28.715 [2024-09-30 21:59:13.432280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.432312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.432334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.715 [2024-09-30 21:59:13.432340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:28.715 [2024-09-30 21:59:13.432348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.432390] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:28.715 [2024-09-30 21:59:13.433698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.433719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.715 [2024-09-30 21:59:13.433728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:17:28.715 [2024-09-30 21:59:13.433733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.433768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.433775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.715 [2024-09-30 21:59:13.433784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:28.715 [2024-09-30 21:59:13.433790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.433813] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:28.715 [2024-09-30 21:59:13.433934] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:28.715 [2024-09-30 21:59:13.433944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.715 [2024-09-30 21:59:13.433952] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:28.715 [2024-09-30 21:59:13.433969] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.715 [2024-09-30 21:59:13.433975] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.715 [2024-09-30 21:59:13.433984] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:28.715 [2024-09-30 21:59:13.433996] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.715 [2024-09-30 21:59:13.434003] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:28.715 [2024-09-30 21:59:13.434008] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:28.715 [2024-09-30 21:59:13.434024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.434029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.715 [2024-09-30 21:59:13.434037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 
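Every management step in this startup trace is reported as an Action/name/duration/status quartet by trace_step. When skimming long runs for slow steps, pairing the name and duration notices is enough; a sketch, assuming an unwrapped log file (hypothetically ftl.log) and GNU grep/paste:

    grep -E 'trace_step' ftl.log | grep -Eo '(name|duration): .*' | paste - -
    # e.g. "name: Scrub NV cache    duration: 2124.293 ms"

The Action/Rollback and status lines match the first grep but carry no "name:" or "duration:" field, so the second grep drops them and paste joins the remaining lines pairwise.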
00:17:28.715 [2024-09-30 21:59:13.434043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.434125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.715 [2024-09-30 21:59:13.434131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.715 [2024-09-30 21:59:13.434138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:28.715 [2024-09-30 21:59:13.434143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.715 [2024-09-30 21:59:13.434259] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.715 [2024-09-30 21:59:13.434267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.715 [2024-09-30 21:59:13.434275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.715 [2024-09-30 21:59:13.434294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.715 [2024-09-30 21:59:13.434314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.715 [2024-09-30 21:59:13.434325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.715 [2024-09-30 21:59:13.434331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:28.715 [2024-09-30 21:59:13.434338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.715 [2024-09-30 21:59:13.434344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.715 [2024-09-30 21:59:13.434351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:28.715 [2024-09-30 21:59:13.434357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.715 [2024-09-30 21:59:13.434369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.715 [2024-09-30 21:59:13.434389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.715 [2024-09-30 21:59:13.434408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.715 [2024-09-30 21:59:13.434427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
107.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.715 [2024-09-30 21:59:13.434447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.715 [2024-09-30 21:59:13.434466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.715 [2024-09-30 21:59:13.434480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.715 [2024-09-30 21:59:13.434486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:28.715 [2024-09-30 21:59:13.434493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.715 [2024-09-30 21:59:13.434499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:28.715 [2024-09-30 21:59:13.434506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:28.715 [2024-09-30 21:59:13.434512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:28.715 [2024-09-30 21:59:13.434525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:28.715 [2024-09-30 21:59:13.434532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434537] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.715 [2024-09-30 21:59:13.434561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.715 [2024-09-30 21:59:13.434575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.715 [2024-09-30 21:59:13.434588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.715 [2024-09-30 21:59:13.434595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.715 [2024-09-30 21:59:13.434601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.715 [2024-09-30 21:59:13.434608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.715 [2024-09-30 21:59:13.434614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.715 [2024-09-30 21:59:13.434621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.715 [2024-09-30 21:59:13.434629] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.715 [2024-09-30 21:59:13.434640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.715 [2024-09-30 21:59:13.434647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:28.715 [2024-09-30 21:59:13.434654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:28.715 [2024-09-30 21:59:13.434660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:28.715 [2024-09-30 21:59:13.434668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:28.715 [2024-09-30 21:59:13.434674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:28.715 [2024-09-30 21:59:13.434688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:28.715 [2024-09-30 21:59:13.434695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:28.715 [2024-09-30 21:59:13.434702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:28.715 [2024-09-30 21:59:13.434708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:28.715 [2024-09-30 21:59:13.434715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:28.715 [2024-09-30 21:59:13.434721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:28.715 [2024-09-30 21:59:13.434729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:28.715 [2024-09-30 21:59:13.434735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:28.715 [2024-09-30 21:59:13.434742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:28.716 [2024-09-30 21:59:13.434748] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.716 [2024-09-30 21:59:13.434756] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.716 [2024-09-30 21:59:13.434766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.716 [2024-09-30 21:59:13.434773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.716 [2024-09-30 21:59:13.434780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.716 [2024-09-30 21:59:13.434788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.716 [2024-09-30 21:59:13.434794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.716 [2024-09-30 21:59:13.434813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.716 [2024-09-30 21:59:13.434819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:17:28.716 
[2024-09-30 21:59:13.434827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.716 [2024-09-30 21:59:13.434906] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:28.716 [2024-09-30 21:59:13.434922] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:31.246 [2024-09-30 21:59:15.559207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.559267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:31.246 [2024-09-30 21:59:15.559282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2124.293 ms 00:17:31.246 [2024-09-30 21:59:15.559291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.576011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.576060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.246 [2024-09-30 21:59:15.576072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.599 ms 00:17:31.246 [2024-09-30 21:59:15.576085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.576252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.576266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.246 [2024-09-30 21:59:15.576287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:31.246 [2024-09-30 21:59:15.576296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.585938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.585979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.246 [2024-09-30 21:59:15.585991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.591 ms 00:17:31.246 [2024-09-30 21:59:15.586003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.586066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.586080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.246 [2024-09-30 21:59:15.586104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.246 [2024-09-30 21:59:15.586125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.586548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.586580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.246 [2024-09-30 21:59:15.586593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:17:31.246 [2024-09-30 21:59:15.586608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.586773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.586787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.246 [2024-09-30 21:59:15.586798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:17:31.246 [2024-09-30 21:59:15.586810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.593113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.593146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.246 [2024-09-30 21:59:15.593155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.263 ms 00:17:31.246 [2024-09-30 21:59:15.593164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.601599] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:31.246 [2024-09-30 21:59:15.616325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.616354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.246 [2024-09-30 21:59:15.616367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.025 ms 00:17:31.246 [2024-09-30 21:59:15.616374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.664617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.664651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:31.246 [2024-09-30 21:59:15.664667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.177 ms 00:17:31.246 [2024-09-30 21:59:15.664676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.664860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.664873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.246 [2024-09-30 21:59:15.664883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:17:31.246 [2024-09-30 21:59:15.664891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.667808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.667836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:31.246 [2024-09-30 21:59:15.667848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:17:31.246 [2024-09-30 21:59:15.667856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.670179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.670217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:31.246 [2024-09-30 21:59:15.670228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.290 ms 00:17:31.246 [2024-09-30 21:59:15.670236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.670532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.670548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.246 [2024-09-30 21:59:15.670559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:31.246 [2024-09-30 21:59:15.670567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.696182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.696231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L 
region 00:17:31.246 [2024-09-30 21:59:15.696247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.581 ms 00:17:31.246 [2024-09-30 21:59:15.696255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.700064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.700093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:31.246 [2024-09-30 21:59:15.700108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.703 ms 00:17:31.246 [2024-09-30 21:59:15.700135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.702890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.702917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:31.246 [2024-09-30 21:59:15.702927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:17:31.246 [2024-09-30 21:59:15.702946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.706179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.706218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.246 [2024-09-30 21:59:15.706232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:17:31.246 [2024-09-30 21:59:15.706240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.706290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.706300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.246 [2024-09-30 21:59:15.706313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.246 [2024-09-30 21:59:15.706331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.706420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.246 [2024-09-30 21:59:15.706431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.246 [2024-09-30 21:59:15.706442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:31.246 [2024-09-30 21:59:15.706450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.246 [2024-09-30 21:59:15.707272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.246 [2024-09-30 21:59:15.708282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2283.547 ms, result 0 00:17:31.246 [2024-09-30 21:59:15.709135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.246 { 00:17:31.246 "name": "ftl0", 00:17:31.246 "uuid": "33834573-af61-4e72-baa0-f895c2fd95b0" 00:17:31.246 } 00:17:31.246 21:59:15 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:31.246 21:59:15 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:31.246 21:59:15 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:31.246 21:59:15 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:31.246 21:59:15 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:31.246 21:59:15 ftl.ftl_trim -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:31.246 21:59:15 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:31.246 21:59:15 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:31.505 [ 00:17:31.505 { 00:17:31.505 "name": "ftl0", 00:17:31.505 "aliases": [ 00:17:31.505 "33834573-af61-4e72-baa0-f895c2fd95b0" 00:17:31.505 ], 00:17:31.505 "product_name": "FTL disk", 00:17:31.505 "block_size": 4096, 00:17:31.505 "num_blocks": 23592960, 00:17:31.505 "uuid": "33834573-af61-4e72-baa0-f895c2fd95b0", 00:17:31.505 "assigned_rate_limits": { 00:17:31.505 "rw_ios_per_sec": 0, 00:17:31.505 "rw_mbytes_per_sec": 0, 00:17:31.505 "r_mbytes_per_sec": 0, 00:17:31.505 "w_mbytes_per_sec": 0 00:17:31.505 }, 00:17:31.505 "claimed": false, 00:17:31.505 "zoned": false, 00:17:31.505 "supported_io_types": { 00:17:31.505 "read": true, 00:17:31.505 "write": true, 00:17:31.505 "unmap": true, 00:17:31.505 "flush": true, 00:17:31.505 "reset": false, 00:17:31.505 "nvme_admin": false, 00:17:31.505 "nvme_io": false, 00:17:31.505 "nvme_io_md": false, 00:17:31.505 "write_zeroes": true, 00:17:31.505 "zcopy": false, 00:17:31.505 "get_zone_info": false, 00:17:31.505 "zone_management": false, 00:17:31.505 "zone_append": false, 00:17:31.505 "compare": false, 00:17:31.505 "compare_and_write": false, 00:17:31.505 "abort": false, 00:17:31.505 "seek_hole": false, 00:17:31.505 "seek_data": false, 00:17:31.505 "copy": false, 00:17:31.505 "nvme_iov_md": false 00:17:31.505 }, 00:17:31.505 "driver_specific": { 00:17:31.505 "ftl": { 00:17:31.505 "base_bdev": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:31.505 "cache": "nvc0n1p0" 00:17:31.505 } 00:17:31.505 } 00:17:31.505 } 00:17:31.505 ] 00:17:31.505 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:31.505 21:59:16 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:31.505 21:59:16 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:31.764 21:59:16 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:31.764 21:59:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:31.764 21:59:16 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:31.764 { 00:17:31.764 "name": "ftl0", 00:17:31.764 "aliases": [ 00:17:31.764 "33834573-af61-4e72-baa0-f895c2fd95b0" 00:17:31.764 ], 00:17:31.764 "product_name": "FTL disk", 00:17:31.764 "block_size": 4096, 00:17:31.764 "num_blocks": 23592960, 00:17:31.764 "uuid": "33834573-af61-4e72-baa0-f895c2fd95b0", 00:17:31.764 "assigned_rate_limits": { 00:17:31.764 "rw_ios_per_sec": 0, 00:17:31.764 "rw_mbytes_per_sec": 0, 00:17:31.764 "r_mbytes_per_sec": 0, 00:17:31.764 "w_mbytes_per_sec": 0 00:17:31.764 }, 00:17:31.764 "claimed": false, 00:17:31.764 "zoned": false, 00:17:31.764 "supported_io_types": { 00:17:31.764 "read": true, 00:17:31.764 "write": true, 00:17:31.764 "unmap": true, 00:17:31.764 "flush": true, 00:17:31.764 "reset": false, 00:17:31.764 "nvme_admin": false, 00:17:31.764 "nvme_io": false, 00:17:31.764 "nvme_io_md": false, 00:17:31.764 "write_zeroes": true, 00:17:31.764 "zcopy": false, 00:17:31.764 "get_zone_info": false, 00:17:31.764 "zone_management": false, 00:17:31.764 "zone_append": false, 00:17:31.764 "compare": false, 00:17:31.764 "compare_and_write": false, 00:17:31.764 "abort": false, 00:17:31.764 "seek_hole": false, 
00:17:31.764 "seek_data": false, 00:17:31.764 "copy": false, 00:17:31.764 "nvme_iov_md": false 00:17:31.764 }, 00:17:31.764 "driver_specific": { 00:17:31.764 "ftl": { 00:17:31.764 "base_bdev": "874482fe-4e56-441a-b6c2-66da849c13cc", 00:17:31.764 "cache": "nvc0n1p0" 00:17:31.764 } 00:17:31.764 } 00:17:31.764 } 00:17:31.764 ]' 00:17:31.764 21:59:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:31.764 21:59:16 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:31.764 21:59:16 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:32.024 [2024-09-30 21:59:16.737700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.024 [2024-09-30 21:59:16.737742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:32.024 [2024-09-30 21:59:16.737755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.024 [2024-09-30 21:59:16.737766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.024 [2024-09-30 21:59:16.737803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:32.024 [2024-09-30 21:59:16.738271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.024 [2024-09-30 21:59:16.738288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:32.024 [2024-09-30 21:59:16.738299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:17:32.024 [2024-09-30 21:59:16.738306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.738880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.738910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:32.025 [2024-09-30 21:59:16.738923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.542 ms 00:17:32.025 [2024-09-30 21:59:16.738930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.742620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.742640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:32.025 [2024-09-30 21:59:16.742652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.655 ms 00:17:32.025 [2024-09-30 21:59:16.742660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.749718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.749754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:32.025 [2024-09-30 21:59:16.749767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.000 ms 00:17:32.025 [2024-09-30 21:59:16.749776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.751365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.751394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:32.025 [2024-09-30 21:59:16.751404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.500 ms 00:17:32.025 [2024-09-30 21:59:16.751411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.755307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:17:32.025 [2024-09-30 21:59:16.755333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.025 [2024-09-30 21:59:16.755344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms 00:17:32.025 [2024-09-30 21:59:16.755363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.755569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.755577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.025 [2024-09-30 21:59:16.755589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:17:32.025 [2024-09-30 21:59:16.755596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.757147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.757174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.025 [2024-09-30 21:59:16.757199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.506 ms 00:17:32.025 [2024-09-30 21:59:16.757206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.758511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.758535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.025 [2024-09-30 21:59:16.758545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.258 ms 00:17:32.025 [2024-09-30 21:59:16.758552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.759446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.759475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.025 [2024-09-30 21:59:16.759485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:17:32.025 [2024-09-30 21:59:16.759492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.760446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.025 [2024-09-30 21:59:16.760472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.025 [2024-09-30 21:59:16.760482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:17:32.025 [2024-09-30 21:59:16.760489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.025 [2024-09-30 21:59:16.760535] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.025 [2024-09-30 21:59:16.760548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 
0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.760991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.761000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.761007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.761015] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.025 [2024-09-30 21:59:16.761022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 
21:59:16.761233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.026 [2024-09-30 21:59:16.761406] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.026 [2024-09-30 21:59:16.761415] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33834573-af61-4e72-baa0-f895c2fd95b0 00:17:32.026 [2024-09-30 21:59:16.761423] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.026 [2024-09-30 21:59:16.761432] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.026 [2024-09-30 21:59:16.761438] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.026 [2024-09-30 21:59:16.761449] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
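A note on the write counters just above: the FTL debug dump reports WAF (write amplification factor) as "inf" because, by the usual definition, WAF is the ratio of total media writes to user writes, and this freshly created device has issued 960 writes (presumably metadata and startup traffic) against 0 user writes, so the division is undefined and is printed as inf:

  WAF = total writes / user writes = 960 / 0 -> inf

Once user I/O lands, the denominator becomes nonzero and the dump reports a finite ratio.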
00:17:32.026 [2024-09-30 21:59:16.761457] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.026 [2024-09-30 21:59:16.761467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.026 [2024-09-30 21:59:16.761474] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.026 [2024-09-30 21:59:16.761481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.026 [2024-09-30 21:59:16.761487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.026 [2024-09-30 21:59:16.761496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.026 [2024-09-30 21:59:16.761503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.026 [2024-09-30 21:59:16.761514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:17:32.026 [2024-09-30 21:59:16.761520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.763076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.026 [2024-09-30 21:59:16.763095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.026 [2024-09-30 21:59:16.763108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.525 ms 00:17:32.026 [2024-09-30 21:59:16.763115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.763244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.026 [2024-09-30 21:59:16.763254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.026 [2024-09-30 21:59:16.763264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:32.026 [2024-09-30 21:59:16.763271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.768724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.768753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.026 [2024-09-30 21:59:16.768767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.768786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.768867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.768876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.026 [2024-09-30 21:59:16.768888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.768895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.768951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.768959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.026 [2024-09-30 21:59:16.768968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.768977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.769006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.769013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.026 [2024-09-30 21:59:16.769022] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.769029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.778636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.778685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.026 [2024-09-30 21:59:16.778699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.778715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.786766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.786801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.026 [2024-09-30 21:59:16.786814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.786822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.786875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.786884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.026 [2024-09-30 21:59:16.786912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.786919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.786985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.786994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.026 [2024-09-30 21:59:16.787003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.787010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.026 [2024-09-30 21:59:16.787095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.026 [2024-09-30 21:59:16.787104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.026 [2024-09-30 21:59:16.787114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.026 [2024-09-30 21:59:16.787121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.027 [2024-09-30 21:59:16.787205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.027 [2024-09-30 21:59:16.787214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.027 [2024-09-30 21:59:16.787245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.027 [2024-09-30 21:59:16.787252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.027 [2024-09-30 21:59:16.787300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.027 [2024-09-30 21:59:16.787308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.027 [2024-09-30 21:59:16.787318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.027 [2024-09-30 21:59:16.787325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.027 [2024-09-30 21:59:16.787389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.027 [2024-09-30 21:59:16.787412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:17:32.027 [2024-09-30 21:59:16.787423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.027 [2024-09-30 21:59:16.787431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.027 [2024-09-30 21:59:16.787622] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.906 ms, result 0 00:17:32.027 true 00:17:32.027 21:59:16 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 86400 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86400 ']' 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86400 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86400 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:32.027 killing process with pid 86400 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86400' 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86400 00:17:32.027 21:59:16 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86400 00:17:37.291 21:59:21 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:37.857 65536+0 records in 00:17:37.857 65536+0 records out 00:17:37.857 268435456 bytes (268 MB, 256 MiB) copied, 1.07224 s, 250 MB/s 00:17:37.857 21:59:22 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.857 [2024-09-30 21:59:22.656834] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:17:37.857 [2024-09-30 21:59:22.656951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86561 ] 00:17:38.115 [2024-09-30 21:59:22.783714] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
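A note on the write path exercised above: trim.sh first fills a 256 MiB pattern file from /dev/urandom (65536 blocks of 4 KiB, which dd reports copying at ~250 MB/s), then replays it onto the ftl0 bdev with spdk_dd, using the bdev subsystem config captured earlier via save_subsystem_config. A minimal sketch of the equivalent commands, assuming the repo layout shown in the trace and that the dd output redirection to the pattern file is merely elided by the xtrace output:

  # 1) generate a 256 MiB random pattern (65536 x 4 KiB blocks), as trim.sh@66 does
  dd if=/dev/urandom of=test/ftl/random_pattern bs=4K count=65536
  # 2) replay the pattern onto the FTL bdev through spdk_dd; --ob names the
  #    output bdev, --json points at the saved bdev subsystem config that
  #    recreates ftl0 on startup
  ./build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 --json=test/ftl/config/ftl.json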
00:17:38.115 [2024-09-30 21:59:22.800579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.115 [2024-09-30 21:59:22.834563] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.115 [2024-09-30 21:59:22.922977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:38.115 [2024-09-30 21:59:22.923048] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:38.375 [2024-09-30 21:59:23.075680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.075731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:38.375 [2024-09-30 21:59:23.075743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.375 [2024-09-30 21:59:23.075750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.077962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.078003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.375 [2024-09-30 21:59:23.078012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:17:38.375 [2024-09-30 21:59:23.078020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.078090] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:38.375 [2024-09-30 21:59:23.078332] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:38.375 [2024-09-30 21:59:23.078356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.078363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.375 [2024-09-30 21:59:23.078371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:17:38.375 [2024-09-30 21:59:23.078379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.079828] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:38.375 [2024-09-30 21:59:23.082024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.082066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:38.375 [2024-09-30 21:59:23.082077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.198 ms 00:17:38.375 [2024-09-30 21:59:23.082087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.082143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.082153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:38.375 [2024-09-30 21:59:23.082161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:38.375 [2024-09-30 21:59:23.082168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.087251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.087287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.375 [2024-09-30 21:59:23.087297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.033 ms 00:17:38.375 [2024-09-30 21:59:23.087304] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.087407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.087420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.375 [2024-09-30 21:59:23.087428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:38.375 [2024-09-30 21:59:23.087437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.087462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.087472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:38.375 [2024-09-30 21:59:23.087480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:38.375 [2024-09-30 21:59:23.087487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.087507] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:38.375 [2024-09-30 21:59:23.088844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.088877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.375 [2024-09-30 21:59:23.088888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.343 ms 00:17:38.375 [2024-09-30 21:59:23.088895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.088938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.375 [2024-09-30 21:59:23.088948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:38.375 [2024-09-30 21:59:23.088959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:38.375 [2024-09-30 21:59:23.088966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.375 [2024-09-30 21:59:23.088983] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:38.375 [2024-09-30 21:59:23.088997] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:38.375 [2024-09-30 21:59:23.089030] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:38.375 [2024-09-30 21:59:23.089049] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:38.375 [2024-09-30 21:59:23.089149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:38.375 [2024-09-30 21:59:23.089160] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:38.375 [2024-09-30 21:59:23.089170] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:38.375 [2024-09-30 21:59:23.089182] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:38.376 [2024-09-30 21:59:23.089223] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:17:38.376 [2024-09-30 21:59:23.089230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:38.376 [2024-09-30 21:59:23.089236] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:38.376 [2024-09-30 21:59:23.089245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.376 [2024-09-30 21:59:23.089254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:38.376 [2024-09-30 21:59:23.089261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:38.376 [2024-09-30 21:59:23.089268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.376 [2024-09-30 21:59:23.089354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.376 [2024-09-30 21:59:23.089362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:38.376 [2024-09-30 21:59:23.089369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:38.376 [2024-09-30 21:59:23.089378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.376 [2024-09-30 21:59:23.089474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:38.376 [2024-09-30 21:59:23.089483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:38.376 [2024-09-30 21:59:23.089493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:38.376 [2024-09-30 21:59:23.089519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:38.376 [2024-09-30 21:59:23.089551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.376 [2024-09-30 21:59:23.089566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:38.376 [2024-09-30 21:59:23.089573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:38.376 [2024-09-30 21:59:23.089581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.376 [2024-09-30 21:59:23.089588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:38.376 [2024-09-30 21:59:23.089596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:38.376 [2024-09-30 21:59:23.089604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:38.376 [2024-09-30 21:59:23.089619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:38.376 [2024-09-30 21:59:23.089641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089649] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:38.376 [2024-09-30 21:59:23.089663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:38.376 [2024-09-30 21:59:23.089689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:38.376 [2024-09-30 21:59:23.089712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:38.376 [2024-09-30 21:59:23.089734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:38.376 [2024-09-30 21:59:23.089748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:38.376 [2024-09-30 21:59:23.089756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:38.376 [2024-09-30 21:59:23.089764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:38.376 [2024-09-30 21:59:23.089771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:38.376 [2024-09-30 21:59:23.089779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:38.376 [2024-09-30 21:59:23.089786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:38.376 [2024-09-30 21:59:23.089803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:38.376 [2024-09-30 21:59:23.089810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089818] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:38.376 [2024-09-30 21:59:23.089826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:38.376 [2024-09-30 21:59:23.089834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.376 [2024-09-30 21:59:23.089842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.376 [2024-09-30 21:59:23.089850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:38.376 [2024-09-30 21:59:23.089858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:38.376 [2024-09-30 21:59:23.089865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:38.376 [2024-09-30 21:59:23.089872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:38.376 [2024-09-30 21:59:23.089880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:38.376 [2024-09-30 21:59:23.089887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:17:38.376 [2024-09-30 21:59:23.089895] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:38.376 [2024-09-30 21:59:23.089907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.376 [2024-09-30 21:59:23.089915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:38.376 [2024-09-30 21:59:23.089923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:38.376 [2024-09-30 21:59:23.089931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:38.376 [2024-09-30 21:59:23.089937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:38.376 [2024-09-30 21:59:23.089944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:38.376 [2024-09-30 21:59:23.089951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:38.376 [2024-09-30 21:59:23.089958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:38.376 [2024-09-30 21:59:23.089964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:38.376 [2024-09-30 21:59:23.089971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:38.376 [2024-09-30 21:59:23.089977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:38.376 [2024-09-30 21:59:23.089984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:38.376 [2024-09-30 21:59:23.089991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:38.376 [2024-09-30 21:59:23.089998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:38.376 [2024-09-30 21:59:23.090006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:38.376 [2024-09-30 21:59:23.090013] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:38.376 [2024-09-30 21:59:23.090024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.376 [2024-09-30 21:59:23.090031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:38.376 [2024-09-30 21:59:23.090040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:38.376 [2024-09-30 21:59:23.090047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:38.376 [2024-09-30 21:59:23.090054] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:38.376 [2024-09-30 21:59:23.090062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.376 [2024-09-30 21:59:23.090071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:38.376 [2024-09-30 21:59:23.090077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:17:38.376 [2024-09-30 21:59:23.090084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.376 [2024-09-30 21:59:23.108334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.376 [2024-09-30 21:59:23.108375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.376 [2024-09-30 21:59:23.108387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.201 ms 00:17:38.376 [2024-09-30 21:59:23.108402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.376 [2024-09-30 21:59:23.108566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.376 [2024-09-30 21:59:23.108584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:38.376 [2024-09-30 21:59:23.108602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:38.376 [2024-09-30 21:59:23.108614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.376 [2024-09-30 21:59:23.118867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.376 [2024-09-30 21:59:23.118915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.377 [2024-09-30 21:59:23.118931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.222 ms 00:17:38.377 [2024-09-30 21:59:23.118943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.119028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.119046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.377 [2024-09-30 21:59:23.119059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.377 [2024-09-30 21:59:23.119071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.119457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.119494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.377 [2024-09-30 21:59:23.119517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:17:38.377 [2024-09-30 21:59:23.119536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.119729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.119758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.377 [2024-09-30 21:59:23.119776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:17:38.377 [2024-09-30 21:59:23.119790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.125004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 
21:59:23.125039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.377 [2024-09-30 21:59:23.125054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.184 ms 00:17:38.377 [2024-09-30 21:59:23.125064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.127306] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:38.377 [2024-09-30 21:59:23.127344] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:38.377 [2024-09-30 21:59:23.127355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.127362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:38.377 [2024-09-30 21:59:23.127370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:17:38.377 [2024-09-30 21:59:23.127378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.141759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.141794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:38.377 [2024-09-30 21:59:23.141806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.337 ms 00:17:38.377 [2024-09-30 21:59:23.141814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.143327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.143359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:38.377 [2024-09-30 21:59:23.143369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:17:38.377 [2024-09-30 21:59:23.143376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.144602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.144634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:38.377 [2024-09-30 21:59:23.144649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.186 ms 00:17:38.377 [2024-09-30 21:59:23.144656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.144971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.144994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:38.377 [2024-09-30 21:59:23.145009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:38.377 [2024-09-30 21:59:23.145022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.159882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.159928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:38.377 [2024-09-30 21:59:23.159939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.837 ms 00:17:38.377 [2024-09-30 21:59:23.159948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.167436] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:38.377 [2024-09-30 21:59:23.182104] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.182141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:38.377 [2024-09-30 21:59:23.182152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.089 ms 00:17:38.377 [2024-09-30 21:59:23.182166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.182266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.182277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:38.377 [2024-09-30 21:59:23.182286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:38.377 [2024-09-30 21:59:23.182297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.182345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.182354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:38.377 [2024-09-30 21:59:23.182362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:17:38.377 [2024-09-30 21:59:23.182369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.182391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.182400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:38.377 [2024-09-30 21:59:23.182407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:38.377 [2024-09-30 21:59:23.182415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.182446] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:38.377 [2024-09-30 21:59:23.182455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.182467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:38.377 [2024-09-30 21:59:23.182479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:38.377 [2024-09-30 21:59:23.182487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.185915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.185957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:38.377 [2024-09-30 21:59:23.185967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.410 ms 00:17:38.377 [2024-09-30 21:59:23.185975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.377 [2024-09-30 21:59:23.186047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.377 [2024-09-30 21:59:23.186059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:38.377 [2024-09-30 21:59:23.186069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:38.377 [2024-09-30 21:59:23.186077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.636 [2024-09-30 21:59:23.186951] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.636 [2024-09-30 21:59:23.187955] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.018 
ms, result 0 00:17:38.636 [2024-09-30 21:59:23.188621] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:38.636 [2024-09-30 21:59:23.198664] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.323  Copying: 42/256 [MB] (42 MBps) Copying: 88/256 [MB] (45 MBps) Copying: 132/256 [MB] (43 MBps) Copying: 177/256 [MB] (45 MBps) Copying: 220/256 [MB] (42 MBps) Copying: 256/256 [MB] (average 43 MBps)[2024-09-30 21:59:29.043908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:44.323 [2024-09-30 21:59:29.045014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.045052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:44.323 [2024-09-30 21:59:29.045064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:44.323 [2024-09-30 21:59:29.045075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.045095] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:44.323 [2024-09-30 21:59:29.045532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.045562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:44.323 [2024-09-30 21:59:29.045571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:44.323 [2024-09-30 21:59:29.045578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.047035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.047069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:44.323 [2024-09-30 21:59:29.047079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:17:44.323 [2024-09-30 21:59:29.047086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.053111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.053146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:44.323 [2024-09-30 21:59:29.053155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.008 ms 00:17:44.323 [2024-09-30 21:59:29.053162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.060063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.060093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:44.323 [2024-09-30 21:59:29.060104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:17:44.323 [2024-09-30 21:59:29.060117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.061286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.061318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:44.323 [2024-09-30 21:59:29.061327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.130 ms 00:17:44.323 [2024-09-30 21:59:29.061334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.064626] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.064664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:44.323 [2024-09-30 21:59:29.064673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.264 ms 00:17:44.323 [2024-09-30 21:59:29.064680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.064794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.064820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:44.323 [2024-09-30 21:59:29.064829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:44.323 [2024-09-30 21:59:29.064837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.066685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.066729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:44.323 [2024-09-30 21:59:29.066740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.830 ms 00:17:44.323 [2024-09-30 21:59:29.066748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.068229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.068260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:44.323 [2024-09-30 21:59:29.068271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:17:44.323 [2024-09-30 21:59:29.068279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.069225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.069254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:44.323 [2024-09-30 21:59:29.069264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:17:44.323 [2024-09-30 21:59:29.069270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.070308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.323 [2024-09-30 21:59:29.070338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:44.323 [2024-09-30 21:59:29.070346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:17:44.323 [2024-09-30 21:59:29.070353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.323 [2024-09-30 21:59:29.070380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:44.323 [2024-09-30 21:59:29.070405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070444] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 
21:59:29.070632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:44.323 [2024-09-30 21:59:29.070763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:17:44.324 [2024-09-30 21:59:29.070816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.070993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:44.324 [2024-09-30 21:59:29.071155] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:44.324 [2024-09-30 21:59:29.071163] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33834573-af61-4e72-baa0-f895c2fd95b0 00:17:44.324 [2024-09-30 21:59:29.071171] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:44.324 [2024-09-30 21:59:29.071179] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:44.324 [2024-09-30 21:59:29.071202] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:44.324 [2024-09-30 21:59:29.071214] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:44.324 [2024-09-30 21:59:29.071220] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:44.324 [2024-09-30 21:59:29.071229] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:44.324 [2024-09-30 21:59:29.071242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:44.324 [2024-09-30 21:59:29.071249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:44.324 [2024-09-30 21:59:29.071255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:44.324 [2024-09-30 21:59:29.071262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.324 [2024-09-30 21:59:29.071270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:44.324 [2024-09-30 21:59:29.071280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:17:44.324 [2024-09-30 21:59:29.071287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.072730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.324 [2024-09-30 21:59:29.072755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:44.324 [2024-09-30 21:59:29.072764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:17:44.324 [2024-09-30 21:59:29.072771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.072847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.324 [2024-09-30 21:59:29.072860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:44.324 [2024-09-30 21:59:29.072868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:44.324 [2024-09-30 21:59:29.072875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.077667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.077699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.324 [2024-09-30 21:59:29.077708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.077715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.077782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.077798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.324 [2024-09-30 21:59:29.077806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.077813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.077851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.077861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.324 [2024-09-30 21:59:29.077870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.077880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.077896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.077908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:17:44.324 [2024-09-30 21:59:29.077917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.077926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.086545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.086581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.324 [2024-09-30 21:59:29.086590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.086599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.093655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.093697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.324 [2024-09-30 21:59:29.093707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.093714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.093758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.093768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.324 [2024-09-30 21:59:29.093776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.093784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.093811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.093821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.324 [2024-09-30 21:59:29.093828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.093838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.093898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.093908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.324 [2024-09-30 21:59:29.093915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.093924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.093951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.093960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:44.324 [2024-09-30 21:59:29.093968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.093975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.094013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.094027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.324 [2024-09-30 21:59:29.094035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:44.324 [2024-09-30 21:59:29.094042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.324 [2024-09-30 21:59:29.094084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:44.324 [2024-09-30 21:59:29.094094] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:44.324 [2024-09-30 21:59:29.094103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:44.324 [2024-09-30 21:59:29.094112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:44.324 [2024-09-30 21:59:29.094247] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.218 ms, result 0
00:17:44.890
00:17:44.890
00:17:44.890 21:59:29 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=86639
00:17:44.890 21:59:29 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 86639
00:17:44.890 21:59:29 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:17:44.890 21:59:29 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86639 ']'
00:17:44.890 21:59:29 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:44.890 21:59:29 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:44.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:44.890 21:59:29 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:44.890 21:59:29 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:44.890 21:59:29 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:17:45.149 [2024-09-30 21:59:29.598098] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... [2024-09-30 21:59:29.598236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86639 ]
00:17:45.149 [2024-09-30 21:59:29.726286] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
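
The trace_step records that dominate this log always arrive as a fixed quadruple per management step: an "Action" (or, during teardown, "Rollback") record, followed by "name", "duration" and "status" records for that same step, each introduced by the harness wall-clock stamp (00:17:44.890 above) and the SPDK timestamp in brackets. Folding the quadruples back together is the quickest way to see where the time in a run went, e.g. in the 'FTL shutdown' that finished above in 49.218 ms. The following is a minimal, hypothetical sketch, not part of the SPDK tree or of this test; it assumes nothing beyond the record shape visible in this log, and the script name summarize_trace.py is invented for the example.

#!/usr/bin/env python3
"""Fold FTL trace_step quadruples out of a captured autotest log.

A sketch under stated assumptions: every step is logged as four
*NOTICE* records (Action/Rollback, name, duration, status), and each
record is introduced by an HH:MM:SS.mmm harness stamp.
"""
import re
import sys

# One record's message: from "[FTL][<dev>] " up to the next harness
# stamp (e.g. " 00:17:44.890 ") or the end of the input.
RECORD = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] (.*?)"
    r"(?=\s\d{2}:\d{2}:\d{2}\.\d{3}\s|\Z)",
    re.DOTALL,
)

def steps(text):
    """Yield one dict per Action/Rollback quadruple."""
    step = None
    for msg in RECORD.findall(text):
        msg = " ".join(msg.split())  # undo mid-record line wrapping
        if msg in ("Action", "Rollback"):
            if step:
                yield step
            step = {"kind": msg}
        elif step is not None and ": " in msg:
            key, _, value = msg.partition(": ")
            step[key] = value  # name / duration ("X ms") / status
    if step:
        yield step

if __name__ == "__main__":
    total = 0.0
    for s in steps(sys.stdin.read()):
        ms = float(s.get("duration", "0 ms").split()[0])
        total += ms
        print(f"{s['kind']:<8} {s.get('name', '?'):<34} {ms:9.3f} ms  "
              f"status={s.get('status', '?')}")
    print(f"{'total accounted':<44}{total:9.3f} ms")

Run as, say, python3 summarize_trace.py < autotest.log: one row prints per step, and the per-process step totals can be checked against the duration that finish_msg reports for that process (111.018 ms for the 'FTL startup' above); the step sum typically comes in a little under, the remainder being time spent between traced steps.
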
00:17:45.149 [2024-09-30 21:59:29.748343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.149 [2024-09-30 21:59:29.782232] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.716 21:59:30 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:45.716 21:59:30 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:45.716 21:59:30 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:45.974 [2024-09-30 21:59:30.636788] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:45.974 [2024-09-30 21:59:30.636847] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:45.974 [2024-09-30 21:59:30.781709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.974 [2024-09-30 21:59:30.781754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:45.974 [2024-09-30 21:59:30.781769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:45.974 [2024-09-30 21:59:30.781777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.975 [2024-09-30 21:59:30.784004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.975 [2024-09-30 21:59:30.784042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.975 [2024-09-30 21:59:30.784053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:17:45.975 [2024-09-30 21:59:30.784061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.975 [2024-09-30 21:59:30.784273] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:45.975 [2024-09-30 21:59:30.784554] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:45.975 [2024-09-30 21:59:30.784584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.975 [2024-09-30 21:59:30.784592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.975 [2024-09-30 21:59:30.784602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:17:45.975 [2024-09-30 21:59:30.784610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.975 [2024-09-30 21:59:30.785670] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:46.235 [2024-09-30 21:59:30.787834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.787871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:46.235 [2024-09-30 21:59:30.787881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.168 ms 00:17:46.235 [2024-09-30 21:59:30.787890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.787942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.787959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:46.235 [2024-09-30 21:59:30.787967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:46.235 [2024-09-30 21:59:30.787977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.792931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 
21:59:30.792964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.235 [2024-09-30 21:59:30.792974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:17:46.235 [2024-09-30 21:59:30.792983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.793074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.793086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.235 [2024-09-30 21:59:30.793096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:46.235 [2024-09-30 21:59:30.793105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.793132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.793145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:46.235 [2024-09-30 21:59:30.793156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:46.235 [2024-09-30 21:59:30.793164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.793200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:46.235 [2024-09-30 21:59:30.794529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.794555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.235 [2024-09-30 21:59:30.794566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.341 ms 00:17:46.235 [2024-09-30 21:59:30.794579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.794627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.794637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:46.235 [2024-09-30 21:59:30.794650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:46.235 [2024-09-30 21:59:30.794656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.794678] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:46.235 [2024-09-30 21:59:30.794694] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:46.235 [2024-09-30 21:59:30.794734] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:46.235 [2024-09-30 21:59:30.794751] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:46.235 [2024-09-30 21:59:30.794854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:46.235 [2024-09-30 21:59:30.794865] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:46.235 [2024-09-30 21:59:30.794880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:46.235 [2024-09-30 21:59:30.794890] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:46.235 [2024-09-30 21:59:30.794902] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:46.235 [2024-09-30 21:59:30.794910] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:46.235 [2024-09-30 21:59:30.794919] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:46.235 [2024-09-30 21:59:30.794926] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:46.235 [2024-09-30 21:59:30.794935] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:46.235 [2024-09-30 21:59:30.794943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.794956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:46.235 [2024-09-30 21:59:30.794963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:17:46.235 [2024-09-30 21:59:30.794972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.795057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.235 [2024-09-30 21:59:30.795071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:46.235 [2024-09-30 21:59:30.795079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:46.235 [2024-09-30 21:59:30.795087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.235 [2024-09-30 21:59:30.795204] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:46.235 [2024-09-30 21:59:30.795220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:46.235 [2024-09-30 21:59:30.795229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.235 [2024-09-30 21:59:30.795243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:46.235 [2024-09-30 21:59:30.795264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:46.235 [2024-09-30 21:59:30.795282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:46.235 [2024-09-30 21:59:30.795291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.235 [2024-09-30 21:59:30.795308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:46.235 [2024-09-30 21:59:30.795318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:46.235 [2024-09-30 21:59:30.795325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:46.235 [2024-09-30 21:59:30.795338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:46.235 [2024-09-30 21:59:30.795347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:46.235 [2024-09-30 21:59:30.795356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:46.235 [2024-09-30 21:59:30.795372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:46.235 [2024-09-30 21:59:30.795380] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:46.235 [2024-09-30 21:59:30.795399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.235 [2024-09-30 21:59:30.795417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:46.235 [2024-09-30 21:59:30.795427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.235 [2024-09-30 21:59:30.795444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:46.235 [2024-09-30 21:59:30.795452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.235 [2024-09-30 21:59:30.795469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:46.235 [2024-09-30 21:59:30.795479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:46.235 [2024-09-30 21:59:30.795486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:46.235 [2024-09-30 21:59:30.795495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:46.236 [2024-09-30 21:59:30.795503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:46.236 [2024-09-30 21:59:30.795513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.236 [2024-09-30 21:59:30.795520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:46.236 [2024-09-30 21:59:30.795530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:46.236 [2024-09-30 21:59:30.795538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:46.236 [2024-09-30 21:59:30.795546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:46.236 [2024-09-30 21:59:30.795554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:46.236 [2024-09-30 21:59:30.795563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.236 [2024-09-30 21:59:30.795570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:46.236 [2024-09-30 21:59:30.795579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:46.236 [2024-09-30 21:59:30.795586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.236 [2024-09-30 21:59:30.795596] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:46.236 [2024-09-30 21:59:30.795603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:46.236 [2024-09-30 21:59:30.795611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:46.236 [2024-09-30 21:59:30.795617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:46.236 [2024-09-30 21:59:30.795627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:46.236 [2024-09-30 21:59:30.795633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:46.236 [2024-09-30 21:59:30.795641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:46.236 [2024-09-30 21:59:30.795648] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:46.236 [2024-09-30 21:59:30.795657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:46.236 [2024-09-30 21:59:30.795664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:46.236 [2024-09-30 21:59:30.795674] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:46.236 [2024-09-30 21:59:30.795686] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.236 [2024-09-30 21:59:30.795699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:46.236 [2024-09-30 21:59:30.795707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:46.236 [2024-09-30 21:59:30.795718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:46.236 [2024-09-30 21:59:30.795725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:46.236 [2024-09-30 21:59:30.795733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:46.236 [2024-09-30 21:59:30.795741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:46.236 [2024-09-30 21:59:30.795749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:46.236 [2024-09-30 21:59:30.795757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:46.236 [2024-09-30 21:59:30.795766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:46.236 [2024-09-30 21:59:30.795773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:46.236 [2024-09-30 21:59:30.795781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:46.236 [2024-09-30 21:59:30.795788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:46.236 [2024-09-30 21:59:30.795798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:46.236 [2024-09-30 21:59:30.795806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:46.236 [2024-09-30 21:59:30.795814] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:46.236 [2024-09-30 21:59:30.795824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:46.236 [2024-09-30 21:59:30.795836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:17:46.236 [2024-09-30 21:59:30.795843] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:46.236 [2024-09-30 21:59:30.795851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:46.236 [2024-09-30 21:59:30.795858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:46.236 [2024-09-30 21:59:30.795868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.795876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:46.236 [2024-09-30 21:59:30.795885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:17:46.236 [2024-09-30 21:59:30.795892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.804721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.804752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.236 [2024-09-30 21:59:30.804767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.771 ms 00:17:46.236 [2024-09-30 21:59:30.804778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.804891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.804909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:46.236 [2024-09-30 21:59:30.804920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:46.236 [2024-09-30 21:59:30.804927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.813149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.813179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.236 [2024-09-30 21:59:30.813207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.200 ms 00:17:46.236 [2024-09-30 21:59:30.813216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.813258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.813267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.236 [2024-09-30 21:59:30.813282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:46.236 [2024-09-30 21:59:30.813289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.813652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.813675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.236 [2024-09-30 21:59:30.813685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:17:46.236 [2024-09-30 21:59:30.813695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.813825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.813837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.236 [2024-09-30 21:59:30.813848] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:17:46.236 [2024-09-30 21:59:30.813856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.831826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.831871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.236 [2024-09-30 21:59:30.831888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.941 ms 00:17:46.236 [2024-09-30 21:59:30.831897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.834547] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:46.236 [2024-09-30 21:59:30.834587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:46.236 [2024-09-30 21:59:30.834603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.834612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:46.236 [2024-09-30 21:59:30.834624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.572 ms 00:17:46.236 [2024-09-30 21:59:30.834634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.850873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.850907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:46.236 [2024-09-30 21:59:30.850921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.190 ms 00:17:46.236 [2024-09-30 21:59:30.850930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.852730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.852760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:46.236 [2024-09-30 21:59:30.852771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.729 ms 00:17:46.236 [2024-09-30 21:59:30.852778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.854299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.854328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:46.236 [2024-09-30 21:59:30.854338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:17:46.236 [2024-09-30 21:59:30.854345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.854658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.854677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.236 [2024-09-30 21:59:30.854688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:17:46.236 [2024-09-30 21:59:30.854695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.236 [2024-09-30 21:59:30.869942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.236 [2024-09-30 21:59:30.870104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:46.236 [2024-09-30 21:59:30.870125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.225 ms 
00:17:46.236 [2024-09-30 21:59:30.870133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 21:59:30.877435] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:46.237 [2024-09-30 21:59:30.891126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.237 [2024-09-30 21:59:30.891165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:46.237 [2024-09-30 21:59:30.891175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.946 ms 00:17:46.237 [2024-09-30 21:59:30.891203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 21:59:30.891284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.237 [2024-09-30 21:59:30.891300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:46.237 [2024-09-30 21:59:30.891308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:46.237 [2024-09-30 21:59:30.891317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 21:59:30.891365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.237 [2024-09-30 21:59:30.891375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.237 [2024-09-30 21:59:30.891382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:46.237 [2024-09-30 21:59:30.891391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 21:59:30.891415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.237 [2024-09-30 21:59:30.891427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.237 [2024-09-30 21:59:30.891436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:46.237 [2024-09-30 21:59:30.891445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 21:59:30.891475] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:46.237 [2024-09-30 21:59:30.891486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.237 [2024-09-30 21:59:30.891494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:46.237 [2024-09-30 21:59:30.891503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:46.237 [2024-09-30 21:59:30.891510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 21:59:30.894921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.237 [2024-09-30 21:59:30.894954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.237 [2024-09-30 21:59:30.894966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.389 ms 00:17:46.237 [2024-09-30 21:59:30.894977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 21:59:30.895050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.237 [2024-09-30 21:59:30.895060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.237 [2024-09-30 21:59:30.895070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:46.237 [2024-09-30 21:59:30.895077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.237 [2024-09-30 
21:59:30.895860] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:46.237 [2024-09-30 21:59:30.896840] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.900 ms, result 0 00:17:46.237 [2024-09-30 21:59:30.897765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:46.237 Some configs were skipped because the RPC state that can call them passed over. 00:17:46.237 21:59:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:46.496 [2024-09-30 21:59:31.113251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.496 [2024-09-30 21:59:31.113392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:46.496 [2024-09-30 21:59:31.113411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.530 ms 00:17:46.496 [2024-09-30 21:59:31.113421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.496 [2024-09-30 21:59:31.113456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.741 ms, result 0 00:17:46.496 true 00:17:46.496 21:59:31 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:46.767 [2024-09-30 21:59:31.317304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.767 [2024-09-30 21:59:31.317341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:46.767 [2024-09-30 21:59:31.317353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:17:46.767 [2024-09-30 21:59:31.317360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.767 [2024-09-30 21:59:31.317395] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.440 ms, result 0 00:17:46.767 true 00:17:46.767 21:59:31 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 86639 00:17:46.767 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86639 ']' 00:17:46.767 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86639 00:17:46.767 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:46.767 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:46.767 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86639 00:17:46.768 killing process with pid 86639 00:17:46.768 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:46.768 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:46.768 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86639' 00:17:46.768 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86639 00:17:46.768 21:59:31 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86639 00:17:46.768 [2024-09-30 21:59:31.463562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.463614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:46.768 [2024-09-30 21:59:31.463625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:46.768 [2024-09-30 
21:59:31.463636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.463657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:46.768 [2024-09-30 21:59:31.464088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.464105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:46.768 [2024-09-30 21:59:31.464118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:17:46.768 [2024-09-30 21:59:31.464129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.464424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.464443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:46.768 [2024-09-30 21:59:31.464454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:46.768 [2024-09-30 21:59:31.464463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.468501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.468536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:46.768 [2024-09-30 21:59:31.468547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.016 ms 00:17:46.768 [2024-09-30 21:59:31.468559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.475502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.475531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:46.768 [2024-09-30 21:59:31.475547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.913 ms 00:17:46.768 [2024-09-30 21:59:31.475555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.477135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.477168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:46.768 [2024-09-30 21:59:31.477179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:17:46.768 [2024-09-30 21:59:31.477205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.480834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.480865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:46.768 [2024-09-30 21:59:31.480876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.591 ms 00:17:46.768 [2024-09-30 21:59:31.480885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.481009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.481019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:46.768 [2024-09-30 21:59:31.481029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:46.768 [2024-09-30 21:59:31.481036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.482915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.483058] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:46.768 [2024-09-30 21:59:31.483077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.858 ms 00:17:46.768 [2024-09-30 21:59:31.483085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.484673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.484697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:46.768 [2024-09-30 21:59:31.484707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.550 ms 00:17:46.768 [2024-09-30 21:59:31.484714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.485823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.485856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:46.768 [2024-09-30 21:59:31.485867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:17:46.768 [2024-09-30 21:59:31.485874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.487094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.768 [2024-09-30 21:59:31.487126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:46.768 [2024-09-30 21:59:31.487137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.156 ms 00:17:46.768 [2024-09-30 21:59:31.487143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.768 [2024-09-30 21:59:31.487176] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:46.768 [2024-09-30 21:59:31.487204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487311] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487527] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:46.768 [2024-09-30 21:59:31.487597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 
21:59:31.487737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:17:46.769 [2024-09-30 21:59:31.487953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.487986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:46.769 [2024-09-30 21:59:31.488077] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:46.769 [2024-09-30 21:59:31.488089] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33834573-af61-4e72-baa0-f895c2fd95b0 00:17:46.769 [2024-09-30 21:59:31.488098] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:46.769 [2024-09-30 21:59:31.488108] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:46.769 [2024-09-30 21:59:31.488115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:46.769 [2024-09-30 21:59:31.488124] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:46.769 [2024-09-30 21:59:31.488133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:46.769 [2024-09-30 21:59:31.488142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:46.769 [2024-09-30 21:59:31.488149] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:46.769 [2024-09-30 21:59:31.488157] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:46.769 [2024-09-30 21:59:31.488164] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:46.769 [2024-09-30 21:59:31.488173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.769 [2024-09-30 21:59:31.488185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:46.769 [2024-09-30 21:59:31.488206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:17:46.769 [2024-09-30 21:59:31.488214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.490163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.769 [2024-09-30 21:59:31.490277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:46.769 [2024-09-30 21:59:31.490331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.914 ms 00:17:46.769 [2024-09-30 21:59:31.490355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.490478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.769 [2024-09-30 21:59:31.490508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:46.769 [2024-09-30 21:59:31.490624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:46.769 [2024-09-30 21:59:31.490647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.495932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.769 [2024-09-30 21:59:31.496037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.769 [2024-09-30 21:59:31.496110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.769 [2024-09-30 21:59:31.496133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.496267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.769 [2024-09-30 21:59:31.496338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.769 [2024-09-30 21:59:31.496482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.769 [2024-09-30 21:59:31.496504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.496593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.769 [2024-09-30 21:59:31.496625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.769 [2024-09-30 21:59:31.496763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.769 [2024-09-30 21:59:31.496792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.496830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.769 [2024-09-30 21:59:31.496942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.769 [2024-09-30 21:59:31.496967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.769 [2024-09-30 21:59:31.496987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.506068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.769 [2024-09-30 21:59:31.506204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.769 [2024-09-30 21:59:31.506262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.769 [2024-09-30 21:59:31.506320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.769 [2024-09-30 21:59:31.513337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.769 [2024-09-30 21:59:31.513457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.769 [2024-09-30 21:59:31.513514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.770 
[2024-09-30 21:59:31.513536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.770 [2024-09-30 21:59:31.513616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.770 [2024-09-30 21:59:31.513644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.770 [2024-09-30 21:59:31.513692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.770 [2024-09-30 21:59:31.513734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.770 [2024-09-30 21:59:31.513783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.770 [2024-09-30 21:59:31.513829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.770 [2024-09-30 21:59:31.513898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.770 [2024-09-30 21:59:31.513922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.770 [2024-09-30 21:59:31.514030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.770 [2024-09-30 21:59:31.514089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.770 [2024-09-30 21:59:31.514144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.770 [2024-09-30 21:59:31.514241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.770 [2024-09-30 21:59:31.514287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.770 [2024-09-30 21:59:31.514297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:46.770 [2024-09-30 21:59:31.514309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.770 [2024-09-30 21:59:31.514317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.770 [2024-09-30 21:59:31.514367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.770 [2024-09-30 21:59:31.514376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.770 [2024-09-30 21:59:31.514389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.770 [2024-09-30 21:59:31.514399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.770 [2024-09-30 21:59:31.514444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.770 [2024-09-30 21:59:31.514454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.770 [2024-09-30 21:59:31.514464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.770 [2024-09-30 21:59:31.514471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.770 [2024-09-30 21:59:31.514599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 51.012 ms, result 0 00:17:47.054 21:59:31 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:47.054 21:59:31 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.054 [2024-09-30 21:59:31.766464] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:17:47.054 [2024-09-30 21:59:31.766710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86675 ] 00:17:47.312 [2024-09-30 21:59:31.893883] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:47.312 [2024-09-30 21:59:31.915733] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.312 [2024-09-30 21:59:31.949502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.312 [2024-09-30 21:59:32.037424] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:47.312 [2024-09-30 21:59:32.037486] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:47.571 [2024-09-30 21:59:32.189899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.189942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:47.571 [2024-09-30 21:59:32.189956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:47.571 [2024-09-30 21:59:32.189964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.192215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.192249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:47.571 [2024-09-30 21:59:32.192259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:17:47.571 [2024-09-30 21:59:32.192266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.192343] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:47.571 [2024-09-30 21:59:32.192564] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:47.571 [2024-09-30 21:59:32.192580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.192588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:47.571 [2024-09-30 21:59:32.192597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:47.571 [2024-09-30 21:59:32.192604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.193705] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:47.571 [2024-09-30 21:59:32.196060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.196095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:47.571 [2024-09-30 21:59:32.196106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.356 ms 00:17:47.571 [2024-09-30 21:59:32.196115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.196169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.196179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:47.571 [2024-09-30 21:59:32.196207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:47.571 [2024-09-30 
21:59:32.196214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.201135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.201276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:47.571 [2024-09-30 21:59:32.201291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.881 ms 00:17:47.571 [2024-09-30 21:59:32.201298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.201399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.201409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:47.571 [2024-09-30 21:59:32.201417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:47.571 [2024-09-30 21:59:32.201426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.201454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.201464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:47.571 [2024-09-30 21:59:32.201472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:47.571 [2024-09-30 21:59:32.201480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.201500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:47.571 [2024-09-30 21:59:32.202806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.202834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:47.571 [2024-09-30 21:59:32.202843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:17:47.571 [2024-09-30 21:59:32.202850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.202886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.202895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:47.571 [2024-09-30 21:59:32.202905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:47.571 [2024-09-30 21:59:32.202914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.202930] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:47.571 [2024-09-30 21:59:32.202947] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:47.571 [2024-09-30 21:59:32.202980] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:47.571 [2024-09-30 21:59:32.202997] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:47.571 [2024-09-30 21:59:32.203105] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:47.571 [2024-09-30 21:59:32.203116] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:47.571 [2024-09-30 21:59:32.203126] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:47.571 [2024-09-30 21:59:32.203135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:47.571 [2024-09-30 21:59:32.203145] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:47.571 [2024-09-30 21:59:32.203153] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:47.571 [2024-09-30 21:59:32.203160] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:47.571 [2024-09-30 21:59:32.203166] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:47.571 [2024-09-30 21:59:32.203173] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:47.571 [2024-09-30 21:59:32.203181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.203205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:47.571 [2024-09-30 21:59:32.203213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:17:47.571 [2024-09-30 21:59:32.203220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.203307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.571 [2024-09-30 21:59:32.203316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:47.571 [2024-09-30 21:59:32.203323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:47.571 [2024-09-30 21:59:32.203334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.571 [2024-09-30 21:59:32.203434] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:47.571 [2024-09-30 21:59:32.203444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:47.571 [2024-09-30 21:59:32.203458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:47.571 [2024-09-30 21:59:32.203467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.571 [2024-09-30 21:59:32.203476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:47.571 [2024-09-30 21:59:32.203483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:47.571 [2024-09-30 21:59:32.203495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:47.571 [2024-09-30 21:59:32.203503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:47.571 [2024-09-30 21:59:32.203514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:47.572 [2024-09-30 21:59:32.203530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:47.572 [2024-09-30 21:59:32.203537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:47.572 [2024-09-30 21:59:32.203545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:47.572 [2024-09-30 21:59:32.203553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:47.572 [2024-09-30 21:59:32.203560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:47.572 [2024-09-30 21:59:32.203569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:47.572 [2024-09-30 21:59:32.203584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:47.572 [2024-09-30 21:59:32.203591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:47.572 [2024-09-30 21:59:32.203607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.572 [2024-09-30 21:59:32.203622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:47.572 [2024-09-30 21:59:32.203630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.572 [2024-09-30 21:59:32.203653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:47.572 [2024-09-30 21:59:32.203661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.572 [2024-09-30 21:59:32.203676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:47.572 [2024-09-30 21:59:32.203685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:47.572 [2024-09-30 21:59:32.203701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:47.572 [2024-09-30 21:59:32.203709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:47.572 [2024-09-30 21:59:32.203723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:47.572 [2024-09-30 21:59:32.203731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:47.572 [2024-09-30 21:59:32.203739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:47.572 [2024-09-30 21:59:32.203747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:47.572 [2024-09-30 21:59:32.203754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:47.572 [2024-09-30 21:59:32.203762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:47.572 [2024-09-30 21:59:32.203779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:47.572 [2024-09-30 21:59:32.203786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203794] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:47.572 [2024-09-30 21:59:32.203802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:47.572 [2024-09-30 21:59:32.203810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:47.572 [2024-09-30 21:59:32.203817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:47.572 [2024-09-30 21:59:32.203827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:47.572 [2024-09-30 21:59:32.203834] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:47.572 [2024-09-30 21:59:32.203841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:47.572 [2024-09-30 21:59:32.203849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:47.572 [2024-09-30 21:59:32.203857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:47.572 [2024-09-30 21:59:32.203864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:47.572 [2024-09-30 21:59:32.203872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:47.572 [2024-09-30 21:59:32.203883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:47.572 [2024-09-30 21:59:32.203892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:47.572 [2024-09-30 21:59:32.203901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:47.572 [2024-09-30 21:59:32.203907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:47.572 [2024-09-30 21:59:32.203914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:47.572 [2024-09-30 21:59:32.203922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:47.572 [2024-09-30 21:59:32.203928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:47.572 [2024-09-30 21:59:32.203937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:47.572 [2024-09-30 21:59:32.203945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:47.572 [2024-09-30 21:59:32.203952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:47.572 [2024-09-30 21:59:32.203958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:47.572 [2024-09-30 21:59:32.203965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:47.572 [2024-09-30 21:59:32.203974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:47.572 [2024-09-30 21:59:32.203980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:47.572 [2024-09-30 21:59:32.203987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:47.572 [2024-09-30 21:59:32.203994] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:47.572 [2024-09-30 21:59:32.204003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:47.572 [2024-09-30 21:59:32.204013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:47.572 [2024-09-30 21:59:32.204022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:47.572 [2024-09-30 21:59:32.204029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:47.572 [2024-09-30 21:59:32.204037] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:47.572 [2024-09-30 21:59:32.204044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.204052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:47.572 [2024-09-30 21:59:32.204060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:17:47.572 [2024-09-30 21:59:32.204067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.224072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.224337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:47.572 [2024-09-30 21:59:32.224371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.945 ms 00:17:47.572 [2024-09-30 21:59:32.224387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.224623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.224655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:47.572 [2024-09-30 21:59:32.224679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:17:47.572 [2024-09-30 21:59:32.224693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.234842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.234873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:47.572 [2024-09-30 21:59:32.234884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.110 ms 00:17:47.572 [2024-09-30 21:59:32.234891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.234952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.234964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:47.572 [2024-09-30 21:59:32.234974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:47.572 [2024-09-30 21:59:32.234981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.235329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.235344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:47.572 [2024-09-30 21:59:32.235352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:17:47.572 [2024-09-30 21:59:32.235385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.235511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.235524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:47.572 [2024-09-30 21:59:32.235535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:47.572 [2024-09-30 21:59:32.235544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.240395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.240422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:47.572 [2024-09-30 21:59:32.240437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:17:47.572 [2024-09-30 21:59:32.240446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.572 [2024-09-30 21:59:32.242810] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:47.572 [2024-09-30 21:59:32.242935] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:47.572 [2024-09-30 21:59:32.242948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.572 [2024-09-30 21:59:32.242956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:47.573 [2024-09-30 21:59:32.242964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.425 ms 00:17:47.573 [2024-09-30 21:59:32.242971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.257688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.257804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:47.573 [2024-09-30 21:59:32.257819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.680 ms 00:17:47.573 [2024-09-30 21:59:32.257827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.259558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.259582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:47.573 [2024-09-30 21:59:32.259590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:17:47.573 [2024-09-30 21:59:32.259597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.261069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.261179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:47.573 [2024-09-30 21:59:32.261209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.436 ms 00:17:47.573 [2024-09-30 21:59:32.261217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.261523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.261540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:47.573 [2024-09-30 21:59:32.261556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:17:47.573 [2024-09-30 21:59:32.261564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.276697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.276735] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:47.573 [2024-09-30 21:59:32.276746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.111 ms 00:17:47.573 [2024-09-30 21:59:32.276754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.284108] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:47.573 [2024-09-30 21:59:32.298317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.298347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:47.573 [2024-09-30 21:59:32.298360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.508 ms 00:17:47.573 [2024-09-30 21:59:32.298368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.298462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.298474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:47.573 [2024-09-30 21:59:32.298483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:47.573 [2024-09-30 21:59:32.298492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.298542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.298552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:47.573 [2024-09-30 21:59:32.298560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:47.573 [2024-09-30 21:59:32.298572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.298592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.298600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:47.573 [2024-09-30 21:59:32.298610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:47.573 [2024-09-30 21:59:32.298620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.298660] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:47.573 [2024-09-30 21:59:32.298670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.298678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:47.573 [2024-09-30 21:59:32.298685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:47.573 [2024-09-30 21:59:32.298693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.302093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.302125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:47.573 [2024-09-30 21:59:32.302134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.382 ms 00:17:47.573 [2024-09-30 21:59:32.302142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.302238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:47.573 [2024-09-30 21:59:32.302249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:47.573 [2024-09-30 21:59:32.302257] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:47.573 [2024-09-30 21:59:32.302265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:47.573 [2024-09-30 21:59:32.303105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:47.573 [2024-09-30 21:59:32.304066] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.860 ms, result 0 00:17:47.573 [2024-09-30 21:59:32.304586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:47.573 [2024-09-30 21:59:32.314853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:53.267  Copying: 47/256 [MB] (47 MBps) Copying: 90/256 [MB] (43 MBps) Copying: 133/256 [MB] (43 MBps) Copying: 181/256 [MB] (47 MBps) Copying: 226/256 [MB] (44 MBps) Copying: 256/256 [MB] (average 45 MBps)[2024-09-30 21:59:37.996912] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:53.267 [2024-09-30 21:59:37.997980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.267 [2024-09-30 21:59:37.998021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:53.267 [2024-09-30 21:59:37.998035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:53.267 [2024-09-30 21:59:37.998044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.267 [2024-09-30 21:59:37.998064] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:53.267 [2024-09-30 21:59:37.998503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.267 [2024-09-30 21:59:37.998519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:53.267 [2024-09-30 21:59:37.998528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:17:53.267 [2024-09-30 21:59:37.998535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.267 [2024-09-30 21:59:37.998781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.267 [2024-09-30 21:59:37.998791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:53.267 [2024-09-30 21:59:37.998802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:17:53.267 [2024-09-30 21:59:37.998810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.267 [2024-09-30 21:59:38.002510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.267 [2024-09-30 21:59:38.002533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:53.268 [2024-09-30 21:59:38.002543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.682 ms 00:17:53.268 [2024-09-30 21:59:38.002551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.009472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.009503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:53.268 [2024-09-30 21:59:38.009519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:17:53.268 [2024-09-30 21:59:38.009526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
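The startup sequence above completes in 112.860 ms end to end, after which spdk_dd streams 256 MB through ftl0, reporting per-interval rates of 43-47 MBps and a 45 MBps average before the shutdown trace begins. A small hedged sketch that recovers those figures from the "Copying:" progress records; the regexes are our own, matched to the lines above.

import re

# Sketch: extract per-interval copy rates and the reported cumulative
# average from spdk_dd-style progress records as seen in the log above.
def copy_rates(log_text):
    rates = [int(r) for r in re.findall(
        r"Copying: +\d+/\d+ \[MB\] \((\d+) MBps\)", log_text)]
    avg = re.search(r"\(average (\d+) MBps\)", log_text)
    return rates, (int(avg.group(1)) if avg else None)

sample = ("Copying: 47/256 [MB] (47 MBps) Copying: 90/256 [MB] (43 MBps) "
          "Copying: 256/256 [MB] (average 45 MBps)")
print(copy_rates(sample))  # ([47, 43], 45)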
00:17:53.268 [2024-09-30 21:59:38.010950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.010981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:53.268 [2024-09-30 21:59:38.010990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:17:53.268 [2024-09-30 21:59:38.010998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.014359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.014391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:53.268 [2024-09-30 21:59:38.014400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:17:53.268 [2024-09-30 21:59:38.014408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.014530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.014540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:53.268 [2024-09-30 21:59:38.014549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:53.268 [2024-09-30 21:59:38.014555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.016238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.016267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:53.268 [2024-09-30 21:59:38.016275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:17:53.268 [2024-09-30 21:59:38.016282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.017628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.017657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:53.268 [2024-09-30 21:59:38.017665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:17:53.268 [2024-09-30 21:59:38.017672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.018770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.018799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:53.268 [2024-09-30 21:59:38.018808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:17:53.268 [2024-09-30 21:59:38.018814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.019686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.268 [2024-09-30 21:59:38.019714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:53.268 [2024-09-30 21:59:38.019723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:17:53.268 [2024-09-30 21:59:38.019730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.268 [2024-09-30 21:59:38.019746] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:53.268 [2024-09-30 21:59:38.019759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.019998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020137] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:53.268 [2024-09-30 21:59:38.020220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020347] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:53.269 [2024-09-30 21:59:38.020536] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:53.269 [2024-09-30 21:59:38.020544] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33834573-af61-4e72-baa0-f895c2fd95b0 00:17:53.269 [2024-09-30 21:59:38.020551] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:53.269 [2024-09-30 21:59:38.020558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:53.269 [2024-09-30 21:59:38.020565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:53.269 [2024-09-30 21:59:38.020572] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:53.269 [2024-09-30 21:59:38.020579] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:53.269 [2024-09-30 21:59:38.020593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:53.269 [2024-09-30 21:59:38.020600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:53.269 [2024-09-30 21:59:38.020606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:53.269 [2024-09-30 21:59:38.020612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:53.269 [2024-09-30 21:59:38.020619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.269 [2024-09-30 21:59:38.020628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:53.269 [2024-09-30 21:59:38.020636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:17:53.269 [2024-09-30 21:59:38.020643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.022024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.269 [2024-09-30 21:59:38.022048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:53.269 [2024-09-30 21:59:38.022056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:17:53.269 [2024-09-30 21:59:38.022063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.022141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.269 [2024-09-30 21:59:38.022150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:53.269 [2024-09-30 21:59:38.022157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:53.269 [2024-09-30 21:59:38.022165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.026964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.026998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.269 [2024-09-30 21:59:38.027007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.027021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.027084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.027093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.269 [2024-09-30 21:59:38.027101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.027108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.027141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.027150] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.269 [2024-09-30 21:59:38.027158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.027168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.027197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.027208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.269 [2024-09-30 21:59:38.027216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.027224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.035945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.035984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.269 [2024-09-30 21:59:38.035994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.036002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.042924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.042962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.269 [2024-09-30 21:59:38.042971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.042988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.043028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.043037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.269 [2024-09-30 21:59:38.043045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.043053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.043080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.043089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.269 [2024-09-30 21:59:38.043099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.043107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.043165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.043174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.269 [2024-09-30 21:59:38.043182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.043203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.043232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.269 [2024-09-30 21:59:38.043242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:53.269 [2024-09-30 21:59:38.043250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.269 [2024-09-30 21:59:38.043264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.269 [2024-09-30 21:59:38.043296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:53.270 [2024-09-30 21:59:38.043305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.270 [2024-09-30 21:59:38.043313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.270 [2024-09-30 21:59:38.043320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.270 [2024-09-30 21:59:38.043365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.270 [2024-09-30 21:59:38.043376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.270 [2024-09-30 21:59:38.043386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.270 [2024-09-30 21:59:38.043393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.270 [2024-09-30 21:59:38.043515] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.517 ms, result 0 00:17:53.527 00:17:53.527 00:17:53.527 21:59:38 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:53.527 21:59:38 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:54.093 21:59:38 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:54.093 [2024-09-30 21:59:38.838777] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:17:54.093 [2024-09-30 21:59:38.838884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86752 ] 00:17:54.351 [2024-09-30 21:59:38.966108] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
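Two notes on the shutdown trace that just finished. The statistics dump reports total writes: 960 against user writes: 0, so WAF (write amplification factor, total writes divided by user writes) is printed as inf because the denominator is zero. The remaining startup steps are then unwound as Rollback records (0.000 ms each) in roughly reverse order of initialization, ending at 'Open base bdev', before 'FTL shutdown' completes in 45.517 ms. The test then verifies the dump: trim.sh line 86 compares the first 4,194,304 bytes of the data file against /dev/zero, line 87 records its md5sum, and line 90 re-launches spdk_dd to write 1024 blocks of random_pattern through ftl0. Below is a minimal Python sketch of the two file checks, assuming only what the commands above show; the path comes from the log, and whether cmp is expected to find the file equal to zeros is not visible in this excerpt.

import hashlib

# Sketch of trim.sh lines 86-87 above, not the test's actual implementation:
# compare the first 4,194,304 bytes of the dump against zeros (the effect of
# `cmp --bytes=4194304 <file> /dev/zero`) and compute the file's md5 digest.
DATA = "/home/vagrant/spdk_repo/spdk/test/ftl/data"  # path taken from the log
NBYTES = 4_194_304  # --bytes=4194304

with open(DATA, "rb") as f:
    head = f.read(NBYTES)
print("first 4 MiB zeroed:", head == bytes(len(head)))  # cmp-equivalent check

md5 = hashlib.md5()
with open(DATA, "rb") as f:
    while chunk := f.read(1 << 20):  # hash in 1 MiB chunks
        md5.update(chunk)
print("md5:", md5.hexdigest())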
00:17:54.351 [2024-09-30 21:59:38.986868] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.351 [2024-09-30 21:59:39.020345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.351 [2024-09-30 21:59:39.107781] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.351 [2024-09-30 21:59:39.107843] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.611 [2024-09-30 21:59:39.259885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.259929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.611 [2024-09-30 21:59:39.259941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.611 [2024-09-30 21:59:39.259949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.262123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.262160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.611 [2024-09-30 21:59:39.262170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:17:54.611 [2024-09-30 21:59:39.262177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.262254] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.611 [2024-09-30 21:59:39.262471] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.611 [2024-09-30 21:59:39.262493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.262501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.611 [2024-09-30 21:59:39.262510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:17:54.611 [2024-09-30 21:59:39.262517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.263692] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.611 [2024-09-30 21:59:39.265909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.265947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.611 [2024-09-30 21:59:39.265957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.218 ms 00:17:54.611 [2024-09-30 21:59:39.265970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.266023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.266034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.611 [2024-09-30 21:59:39.266042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:54.611 [2024-09-30 21:59:39.266049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.270924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.270954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.611 [2024-09-30 21:59:39.270963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.836 ms 00:17:54.611 [2024-09-30 21:59:39.270973] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.271067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.271079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.611 [2024-09-30 21:59:39.271087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:54.611 [2024-09-30 21:59:39.271095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.271118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.271129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.611 [2024-09-30 21:59:39.271136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.611 [2024-09-30 21:59:39.271143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.271162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:54.611 [2024-09-30 21:59:39.272492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.272520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.611 [2024-09-30 21:59:39.272529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.335 ms 00:17:54.611 [2024-09-30 21:59:39.272536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.272571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.272584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.611 [2024-09-30 21:59:39.272594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:54.611 [2024-09-30 21:59:39.272602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.272622] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.611 [2024-09-30 21:59:39.272638] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:54.611 [2024-09-30 21:59:39.272672] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.611 [2024-09-30 21:59:39.272688] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:54.611 [2024-09-30 21:59:39.272788] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.611 [2024-09-30 21:59:39.272805] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.611 [2024-09-30 21:59:39.272819] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.611 [2024-09-30 21:59:39.272829] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.611 [2024-09-30 21:59:39.272837] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.611 [2024-09-30 21:59:39.272849] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:54.611 [2024-09-30 21:59:39.272856] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
L2P address size: 4 00:17:54.611 [2024-09-30 21:59:39.272863] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.611 [2024-09-30 21:59:39.272870] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.611 [2024-09-30 21:59:39.272877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.272887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.611 [2024-09-30 21:59:39.272895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:17:54.611 [2024-09-30 21:59:39.272902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.272988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.611 [2024-09-30 21:59:39.272997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.611 [2024-09-30 21:59:39.273005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:54.611 [2024-09-30 21:59:39.273011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.611 [2024-09-30 21:59:39.273107] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.611 [2024-09-30 21:59:39.273124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.611 [2024-09-30 21:59:39.273135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.611 [2024-09-30 21:59:39.273143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-09-30 21:59:39.273152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.611 [2024-09-30 21:59:39.273161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.611 [2024-09-30 21:59:39.273173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:54.611 [2024-09-30 21:59:39.273182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.611 [2024-09-30 21:59:39.273203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:54.611 [2024-09-30 21:59:39.273211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.611 [2024-09-30 21:59:39.273219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.611 [2024-09-30 21:59:39.273227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:54.611 [2024-09-30 21:59:39.273234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.611 [2024-09-30 21:59:39.273242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.611 [2024-09-30 21:59:39.273250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:54.611 [2024-09-30 21:59:39.273257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-09-30 21:59:39.273265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.611 [2024-09-30 21:59:39.273272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:54.611 [2024-09-30 21:59:39.273280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.611 [2024-09-30 21:59:39.273289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.612 [2024-09-30 21:59:39.273298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273306] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.612 [2024-09-30 21:59:39.273313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.612 [2024-09-30 21:59:39.273320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.612 [2024-09-30 21:59:39.273341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.612 [2024-09-30 21:59:39.273349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.612 [2024-09-30 21:59:39.273364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.612 [2024-09-30 21:59:39.273371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.612 [2024-09-30 21:59:39.273386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.612 [2024-09-30 21:59:39.273394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.612 [2024-09-30 21:59:39.273408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.612 [2024-09-30 21:59:39.273415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:54.612 [2024-09-30 21:59:39.273422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.612 [2024-09-30 21:59:39.273430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.612 [2024-09-30 21:59:39.273438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:54.612 [2024-09-30 21:59:39.273445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.612 [2024-09-30 21:59:39.273462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:54.612 [2024-09-30 21:59:39.273469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273477] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.612 [2024-09-30 21:59:39.273486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.612 [2024-09-30 21:59:39.273496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.612 [2024-09-30 21:59:39.273504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.612 [2024-09-30 21:59:39.273512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.612 [2024-09-30 21:59:39.273520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.612 [2024-09-30 21:59:39.273528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.612 [2024-09-30 21:59:39.273535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.612 [2024-09-30 21:59:39.273541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.612 [2024-09-30 21:59:39.273548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:17:54.612 [2024-09-30 21:59:39.273556] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.612 [2024-09-30 21:59:39.273565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.612 [2024-09-30 21:59:39.273573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:54.612 [2024-09-30 21:59:39.273582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:54.612 [2024-09-30 21:59:39.273589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:54.612 [2024-09-30 21:59:39.273596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:54.612 [2024-09-30 21:59:39.273603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:54.612 [2024-09-30 21:59:39.273610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:54.612 [2024-09-30 21:59:39.273617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:54.612 [2024-09-30 21:59:39.273624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:54.612 [2024-09-30 21:59:39.273631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:54.612 [2024-09-30 21:59:39.273638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:54.612 [2024-09-30 21:59:39.273645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:54.612 [2024-09-30 21:59:39.273652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:54.612 [2024-09-30 21:59:39.273659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:54.612 [2024-09-30 21:59:39.273666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:54.612 [2024-09-30 21:59:39.273673] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.612 [2024-09-30 21:59:39.273680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.612 [2024-09-30 21:59:39.273687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.612 [2024-09-30 21:59:39.273697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.612 [2024-09-30 21:59:39.273704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.612 [2024-09-30 21:59:39.273711] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.612 [2024-09-30 21:59:39.273718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.273727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.612 [2024-09-30 21:59:39.273738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:17:54.612 [2024-09-30 21:59:39.273745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.295027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.295127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.612 [2024-09-30 21:59:39.295166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.222 ms 00:17:54.612 [2024-09-30 21:59:39.295219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.295592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.295654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.612 [2024-09-30 21:59:39.295691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:17:54.612 [2024-09-30 21:59:39.295728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.305401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.305438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.612 [2024-09-30 21:59:39.305447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.615 ms 00:17:54.612 [2024-09-30 21:59:39.305455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.305512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.305524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.612 [2024-09-30 21:59:39.305532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.612 [2024-09-30 21:59:39.305539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.305851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.305873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.612 [2024-09-30 21:59:39.305883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:17:54.612 [2024-09-30 21:59:39.305890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.306014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.306031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.612 [2024-09-30 21:59:39.306042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:54.612 [2024-09-30 21:59:39.306051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.310826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 
21:59:39.310860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.612 [2024-09-30 21:59:39.310870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.753 ms 00:17:54.612 [2024-09-30 21:59:39.310879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.313143] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:54.612 [2024-09-30 21:59:39.313177] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.612 [2024-09-30 21:59:39.313199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.313209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.612 [2024-09-30 21:59:39.313218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:17:54.612 [2024-09-30 21:59:39.313225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.327690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.327722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.612 [2024-09-30 21:59:39.327733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.424 ms 00:17:54.612 [2024-09-30 21:59:39.327741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.329359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.612 [2024-09-30 21:59:39.329390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.612 [2024-09-30 21:59:39.329398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:17:54.612 [2024-09-30 21:59:39.329405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.612 [2024-09-30 21:59:39.330777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.330806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.613 [2024-09-30 21:59:39.330820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:17:54.613 [2024-09-30 21:59:39.330827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.331135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.331158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.613 [2024-09-30 21:59:39.331168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:17:54.613 [2024-09-30 21:59:39.331175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.346337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.346375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.613 [2024-09-30 21:59:39.346385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.130 ms 00:17:54.613 [2024-09-30 21:59:39.346394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.353686] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.613 [2024-09-30 21:59:39.367481] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.367517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.613 [2024-09-30 21:59:39.367527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.035 ms 00:17:54.613 [2024-09-30 21:59:39.367535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.367621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.367632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.613 [2024-09-30 21:59:39.367641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.613 [2024-09-30 21:59:39.367651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.367696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.367705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.613 [2024-09-30 21:59:39.367713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:54.613 [2024-09-30 21:59:39.367720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.367741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.367749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.613 [2024-09-30 21:59:39.367757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.613 [2024-09-30 21:59:39.367764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.367797] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.613 [2024-09-30 21:59:39.367808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.367815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.613 [2024-09-30 21:59:39.367823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:54.613 [2024-09-30 21:59:39.367830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.371040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.371082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.613 [2024-09-30 21:59:39.371092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.192 ms 00:17:54.613 [2024-09-30 21:59:39.371101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.371173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.613 [2024-09-30 21:59:39.371197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.613 [2024-09-30 21:59:39.371207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:54.613 [2024-09-30 21:59:39.371216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.613 [2024-09-30 21:59:39.372207] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.613 [2024-09-30 21:59:39.373180] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.043 
ms, result 0 00:17:54.613 [2024-09-30 21:59:39.373822] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.613 [2024-09-30 21:59:39.383948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.872  Copying: 4096/4096 [kB] (average 38 MBps)[2024-09-30 21:59:39.487825] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.872 [2024-09-30 21:59:39.488460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.488495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:54.872 [2024-09-30 21:59:39.488506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:54.872 [2024-09-30 21:59:39.488514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.488533] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:54.872 [2024-09-30 21:59:39.488952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.488978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:54.872 [2024-09-30 21:59:39.488987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:17:54.872 [2024-09-30 21:59:39.488994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.490395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.490427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:54.872 [2024-09-30 21:59:39.490436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.381 ms 00:17:54.872 [2024-09-30 21:59:39.490447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.494211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.494236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:54.872 [2024-09-30 21:59:39.494250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.749 ms 00:17:54.872 [2024-09-30 21:59:39.494258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.501156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.501197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:54.872 [2024-09-30 21:59:39.501207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.873 ms 00:17:54.872 [2024-09-30 21:59:39.501214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.502463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.502494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:54.872 [2024-09-30 21:59:39.502503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:17:54.872 [2024-09-30 21:59:39.502509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.505864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.505906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist valid map metadata 00:17:54.872 [2024-09-30 21:59:39.505915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms 00:17:54.872 [2024-09-30 21:59:39.505923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.506045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.506063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:54.872 [2024-09-30 21:59:39.506072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:17:54.872 [2024-09-30 21:59:39.506080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.507524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.872 [2024-09-30 21:59:39.507555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:54.872 [2024-09-30 21:59:39.507565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:17:54.872 [2024-09-30 21:59:39.507573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.872 [2024-09-30 21:59:39.508901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-09-30 21:59:39.508931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:54.873 [2024-09-30 21:59:39.508940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:17:54.873 [2024-09-30 21:59:39.508946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-09-30 21:59:39.510003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-09-30 21:59:39.510033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:54.873 [2024-09-30 21:59:39.510041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:17:54.873 [2024-09-30 21:59:39.510049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-09-30 21:59:39.511108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.873 [2024-09-30 21:59:39.511138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:54.873 [2024-09-30 21:59:39.511146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.009 ms 00:17:54.873 [2024-09-30 21:59:39.511153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.873 [2024-09-30 21:59:39.511180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:54.873 [2024-09-30 21:59:39.511213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511262] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 
21:59:39.511446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 
00:17:54.873 [2024-09-30 21:59:39.511631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:54.873 [2024-09-30 21:59:39.511733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 
wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:54.874 [2024-09-30 21:59:39.511960] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:54.874 [2024-09-30 21:59:39.511967] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33834573-af61-4e72-baa0-f895c2fd95b0 00:17:54.874 [2024-09-30 21:59:39.511975] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:54.874 [2024-09-30 21:59:39.511982] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:54.874 [2024-09-30 21:59:39.511989] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:54.874 [2024-09-30 21:59:39.511996] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:54.874 [2024-09-30 21:59:39.512003] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] limits: 00:17:54.874 [2024-09-30 21:59:39.512017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:54.874 [2024-09-30 21:59:39.512025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:54.874 [2024-09-30 21:59:39.512031] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:54.874 [2024-09-30 21:59:39.512038] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:54.874 [2024-09-30 21:59:39.512044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.874 [2024-09-30 21:59:39.512055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:54.874 [2024-09-30 21:59:39.512063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:17:54.874 [2024-09-30 21:59:39.512070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.513267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.874 [2024-09-30 21:59:39.513288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:54.874 [2024-09-30 21:59:39.513302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:17:54.874 [2024-09-30 21:59:39.513312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.513389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.874 [2024-09-30 21:59:39.513398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:54.874 [2024-09-30 21:59:39.513409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:54.874 [2024-09-30 21:59:39.513416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.518166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.518209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.874 [2024-09-30 21:59:39.518218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.518225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.518277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.518284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.874 [2024-09-30 21:59:39.518292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.518299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.518332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.518341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.874 [2024-09-30 21:59:39.518348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.518355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.518372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.518387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.874 [2024-09-30 21:59:39.518395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 
21:59:39.518402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.527064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.527101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.874 [2024-09-30 21:59:39.527111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.527119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.534035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.534072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.874 [2024-09-30 21:59:39.534089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.534098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.534123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.534131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.874 [2024-09-30 21:59:39.534139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.534150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.534177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.534198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.874 [2024-09-30 21:59:39.534209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.534217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.534279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.534290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.874 [2024-09-30 21:59:39.534297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.534306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.534335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.874 [2024-09-30 21:59:39.534345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:54.874 [2024-09-30 21:59:39.534352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.874 [2024-09-30 21:59:39.534366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.874 [2024-09-30 21:59:39.534398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.875 [2024-09-30 21:59:39.534407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.875 [2024-09-30 21:59:39.534416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.875 [2024-09-30 21:59:39.534424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.875 [2024-09-30 21:59:39.534461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:54.875 [2024-09-30 21:59:39.534471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.875 [2024-09-30 21:59:39.534481] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:54.875 [2024-09-30 21:59:39.534488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.875 [2024-09-30 21:59:39.534608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 46.131 ms, result 0 00:17:55.133 00:17:55.133 00:17:55.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.133 21:59:39 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=86766 00:17:55.133 21:59:39 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 86766 00:17:55.133 21:59:39 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:55.133 21:59:39 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 86766 ']' 00:17:55.133 21:59:39 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.133 21:59:39 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:55.133 21:59:39 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.133 21:59:39 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:55.133 21:59:39 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:55.133 [2024-09-30 21:59:39.798436] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:17:55.133 [2024-09-30 21:59:39.798551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86766 ] 00:17:55.133 [2024-09-30 21:59:39.925925] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
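
The 'FTL startup' and 'FTL shutdown' traces above are easiest to audit by folding each trace_step quadruple (Action/Rollback, name, duration, status) printed by mngt/ftl_mngt.c back into a single record per management step. A minimal awk sketch, assuming the console text is saved one entry per line to ftl_trim.log (a hypothetical capture, not a file the test itself produces):

awk '
  # mngt/ftl_mngt.c prints the four fields on consecutive NOTICE lines,
  # so remember each field and emit one record when "status" closes a step.
  /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] (Action|Rollback)/ { kind = $NF }
  /trace_step: .* name: /     { sub(/.*name: /, ""); name = $0 }
  /trace_step: .* duration: / { dur = $(NF - 1); total += dur }
  /trace_step: .* status: /   { printf "%-8s %-35s %8.3f ms  status=%s\n", kind, name, dur, $NF }
  END { printf "sum of step durations: %.3f ms\n", total }
' ftl_trim.log

Summing the steps this way accounts for most, but not all, of the 112.043 ms that finish_msg reports above for 'FTL startup', since finish_msg times the whole management process rather than just the traced steps.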
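Right after the shutdown trace, the xtrace lines from ftl/trim.sh show the test relaunching spdk_tgt in the background (svcpid=86766) and blocking in waitforlisten until the RPC socket answers; the startup banner continuing below comes from that new instance. A minimal sketch of the same start-and-wait pattern, using rpc_get_methods as the readiness probe (an assumption for illustration; the real waitforlisten helper in autotest_common.sh does more):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!

# Poll the UNIX domain socket until the target's RPC server responds,
# bailing out if spdk_tgt dies during startup.
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
while ! /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$svcpid" || exit 1
    sleep 0.1
done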
00:17:55.133 [2024-09-30 21:59:39.938362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.391 [2024-09-30 21:59:39.972268] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.957 21:59:40 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:55.957 21:59:40 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:55.957 21:59:40 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:56.217 [2024-09-30 21:59:40.822438] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:56.217 [2024-09-30 21:59:40.822497] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:56.217 [2024-09-30 21:59:40.988073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:40.988170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:56.217 [2024-09-30 21:59:40.988231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:56.217 [2024-09-30 21:59:40.988257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:40.990835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:40.990872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.217 [2024-09-30 21:59:40.990883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:17:56.217 [2024-09-30 21:59:40.990890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:40.991300] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:56.217 [2024-09-30 21:59:40.991573] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:56.217 [2024-09-30 21:59:40.991605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:40.991614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.217 [2024-09-30 21:59:40.991628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:17:56.217 [2024-09-30 21:59:40.991636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:40.992814] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:56.217 [2024-09-30 21:59:40.995020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:40.995056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:56.217 [2024-09-30 21:59:40.995066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.212 ms 00:17:56.217 [2024-09-30 21:59:40.995075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:40.995138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:40.995156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:56.217 [2024-09-30 21:59:40.995165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:56.217 [2024-09-30 21:59:40.995173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:41.000061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 
21:59:41.000093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.217 [2024-09-30 21:59:41.000106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.813 ms 00:17:56.217 [2024-09-30 21:59:41.000115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:41.000220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:41.000233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.217 [2024-09-30 21:59:41.000242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:56.217 [2024-09-30 21:59:41.000250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:41.000279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:41.000308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:56.217 [2024-09-30 21:59:41.000319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:56.217 [2024-09-30 21:59:41.000328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:41.000350] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:56.217 [2024-09-30 21:59:41.001640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:41.001667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.217 [2024-09-30 21:59:41.001677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:17:56.217 [2024-09-30 21:59:41.001687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:41.001733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:41.001742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:56.217 [2024-09-30 21:59:41.001752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:56.217 [2024-09-30 21:59:41.001759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:41.001782] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:56.217 [2024-09-30 21:59:41.001799] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:56.217 [2024-09-30 21:59:41.001841] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:56.217 [2024-09-30 21:59:41.001858] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:56.217 [2024-09-30 21:59:41.001963] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:56.217 [2024-09-30 21:59:41.001981] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:56.217 [2024-09-30 21:59:41.001998] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:56.217 [2024-09-30 21:59:41.002009] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:56.217 [2024-09-30 21:59:41.002021] ftl_layout.c: 687:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:56.217 [2024-09-30 21:59:41.002032] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:56.217 [2024-09-30 21:59:41.002044] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:56.217 [2024-09-30 21:59:41.002052] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:56.217 [2024-09-30 21:59:41.002060] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:56.217 [2024-09-30 21:59:41.002069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.217 [2024-09-30 21:59:41.002079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:56.217 [2024-09-30 21:59:41.002087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:17:56.217 [2024-09-30 21:59:41.002098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.217 [2024-09-30 21:59:41.002197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-09-30 21:59:41.002214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:56.218 [2024-09-30 21:59:41.002222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:56.218 [2024-09-30 21:59:41.002232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-09-30 21:59:41.002334] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:56.218 [2024-09-30 21:59:41.002354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:56.218 [2024-09-30 21:59:41.002364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:56.218 [2024-09-30 21:59:41.002395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:56.218 [2024-09-30 21:59:41.002422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.218 [2024-09-30 21:59:41.002438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:56.218 [2024-09-30 21:59:41.002448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:56.218 [2024-09-30 21:59:41.002456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:56.218 [2024-09-30 21:59:41.002470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:56.218 [2024-09-30 21:59:41.002478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:56.218 [2024-09-30 21:59:41.002487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:56.218 [2024-09-30 21:59:41.002505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002512] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:56.218 [2024-09-30 21:59:41.002534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:56.218 [2024-09-30 21:59:41.002560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:56.218 [2024-09-30 21:59:41.002586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:56.218 [2024-09-30 21:59:41.002614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:56.218 [2024-09-30 21:59:41.002641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.218 [2024-09-30 21:59:41.002658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:56.218 [2024-09-30 21:59:41.002669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:56.218 [2024-09-30 21:59:41.002676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:56.218 [2024-09-30 21:59:41.002686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:56.218 [2024-09-30 21:59:41.002694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:56.218 [2024-09-30 21:59:41.002703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:56.218 [2024-09-30 21:59:41.002719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:56.218 [2024-09-30 21:59:41.002726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002734] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:56.218 [2024-09-30 21:59:41.002742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:56.218 [2024-09-30 21:59:41.002750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:56.218 [2024-09-30 21:59:41.002771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:56.218 [2024-09-30 21:59:41.002778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:56.218 [2024-09-30 21:59:41.002786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:56.218 [2024-09-30 21:59:41.002793] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:56.218 [2024-09-30 21:59:41.002803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:56.218 [2024-09-30 21:59:41.002809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:56.218 [2024-09-30 21:59:41.002818] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:56.218 [2024-09-30 21:59:41.002828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.218 [2024-09-30 21:59:41.002837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:56.218 [2024-09-30 21:59:41.002845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:56.218 [2024-09-30 21:59:41.002853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:56.218 [2024-09-30 21:59:41.002861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:56.218 [2024-09-30 21:59:41.002870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:56.218 [2024-09-30 21:59:41.002879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:56.218 [2024-09-30 21:59:41.002887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:56.218 [2024-09-30 21:59:41.002895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:56.218 [2024-09-30 21:59:41.002904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:56.218 [2024-09-30 21:59:41.002911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:56.218 [2024-09-30 21:59:41.002920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:56.218 [2024-09-30 21:59:41.002927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:56.218 [2024-09-30 21:59:41.002937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:56.218 [2024-09-30 21:59:41.002945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:56.218 [2024-09-30 21:59:41.002953] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:56.218 [2024-09-30 21:59:41.002961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:56.218 [2024-09-30 21:59:41.002974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:17:56.218 [2024-09-30 21:59:41.002981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:56.218 [2024-09-30 21:59:41.002990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:56.218 [2024-09-30 21:59:41.002998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:56.218 [2024-09-30 21:59:41.003007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-09-30 21:59:41.003014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:56.218 [2024-09-30 21:59:41.003023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:17:56.218 [2024-09-30 21:59:41.003030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-09-30 21:59:41.011758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-09-30 21:59:41.011791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.218 [2024-09-30 21:59:41.011802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.672 ms 00:17:56.218 [2024-09-30 21:59:41.011814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-09-30 21:59:41.011925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-09-30 21:59:41.011958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.218 [2024-09-30 21:59:41.011968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:56.218 [2024-09-30 21:59:41.011975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-09-30 21:59:41.020118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-09-30 21:59:41.020151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.218 [2024-09-30 21:59:41.020161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.120 ms 00:17:56.218 [2024-09-30 21:59:41.020170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-09-30 21:59:41.020233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-09-30 21:59:41.020244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.218 [2024-09-30 21:59:41.020254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.218 [2024-09-30 21:59:41.020261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.218 [2024-09-30 21:59:41.020571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.218 [2024-09-30 21:59:41.020596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.218 [2024-09-30 21:59:41.020607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:17:56.218 [2024-09-30 21:59:41.020615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.219 [2024-09-30 21:59:41.020746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.219 [2024-09-30 21:59:41.020768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.219 [2024-09-30 21:59:41.020778] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:56.219 [2024-09-30 21:59:41.020787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.477 [2024-09-30 21:59:41.033446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.477 [2024-09-30 21:59:41.033484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.477 [2024-09-30 21:59:41.033498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.633 ms 00:17:56.477 [2024-09-30 21:59:41.033507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.477 [2024-09-30 21:59:41.035833] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:56.477 [2024-09-30 21:59:41.035867] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:56.477 [2024-09-30 21:59:41.035881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.477 [2024-09-30 21:59:41.035890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:56.478 [2024-09-30 21:59:41.035900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.253 ms 00:17:56.478 [2024-09-30 21:59:41.035907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.050814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.050846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:56.478 [2024-09-30 21:59:41.050861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.863 ms 00:17:56.478 [2024-09-30 21:59:41.050870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.052549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.052581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:56.478 [2024-09-30 21:59:41.052591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.611 ms 00:17:56.478 [2024-09-30 21:59:41.052599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.053974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.054004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:56.478 [2024-09-30 21:59:41.054015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:17:56.478 [2024-09-30 21:59:41.054023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.054348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.054372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.478 [2024-09-30 21:59:41.054383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:56.478 [2024-09-30 21:59:41.054391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.069469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.069504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:56.478 [2024-09-30 21:59:41.069519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.055 ms 
00:17:56.478 [2024-09-30 21:59:41.069527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.076833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:56.478 [2024-09-30 21:59:41.090510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.090542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.478 [2024-09-30 21:59:41.090557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.932 ms 00:17:56.478 [2024-09-30 21:59:41.090567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.090655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.090667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:56.478 [2024-09-30 21:59:41.090683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:56.478 [2024-09-30 21:59:41.090696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.090750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.090765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.478 [2024-09-30 21:59:41.090776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:56.478 [2024-09-30 21:59:41.090787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.090810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.090822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.478 [2024-09-30 21:59:41.090833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:56.478 [2024-09-30 21:59:41.090844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.090873] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:56.478 [2024-09-30 21:59:41.090884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.090895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:56.478 [2024-09-30 21:59:41.090904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:56.478 [2024-09-30 21:59:41.090915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.094310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.094340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.478 [2024-09-30 21:59:41.094352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:17:56.478 [2024-09-30 21:59:41.094360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 21:59:41.094500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.478 [2024-09-30 21:59:41.094522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.478 [2024-09-30 21:59:41.094532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:56.478 [2024-09-30 21:59:41.094540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.478 [2024-09-30 
21:59:41.095651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.478 [2024-09-30 21:59:41.096642] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 107.391 ms, result 0 00:17:56.478 [2024-09-30 21:59:41.097697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.478 Some configs were skipped because the RPC state that can call them passed over. 00:17:56.478 21:59:41 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:56.736 [2024-09-30 21:59:41.324989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.736 [2024-09-30 21:59:41.325031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:56.736 [2024-09-30 21:59:41.325042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:17:56.736 [2024-09-30 21:59:41.325053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.736 [2024-09-30 21:59:41.325085] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.655 ms, result 0 00:17:56.736 true 00:17:56.737 21:59:41 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:56.737 [2024-09-30 21:59:41.528997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.737 [2024-09-30 21:59:41.529032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:56.737 [2024-09-30 21:59:41.529043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:17:56.737 [2024-09-30 21:59:41.529050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.737 [2024-09-30 21:59:41.529084] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.384 ms, result 0 00:17:56.737 true 00:17:56.737 21:59:41 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 86766 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86766 ']' 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86766 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86766 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86766' 00:17:56.997 killing process with pid 86766 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 86766 00:17:56.997 21:59:41 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 86766 00:17:56.997 [2024-09-30 21:59:41.670600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.670654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:56.997 [2024-09-30 21:59:41.670666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.997 [2024-09-30 
21:59:41.670676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.670698] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:56.997 [2024-09-30 21:59:41.671124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.671155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:56.997 [2024-09-30 21:59:41.671166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.410 ms 00:17:56.997 [2024-09-30 21:59:41.671175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.671464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.671482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:56.997 [2024-09-30 21:59:41.671493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:17:56.997 [2024-09-30 21:59:41.671503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.675508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.675544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:56.997 [2024-09-30 21:59:41.675557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.983 ms 00:17:56.997 [2024-09-30 21:59:41.675565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.682470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.682500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:56.997 [2024-09-30 21:59:41.682514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.870 ms 00:17:56.997 [2024-09-30 21:59:41.682522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.684084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.684116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:56.997 [2024-09-30 21:59:41.684127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:17:56.997 [2024-09-30 21:59:41.684134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.687426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.687463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.997 [2024-09-30 21:59:41.687474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:17:56.997 [2024-09-30 21:59:41.687481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.687607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.687624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.997 [2024-09-30 21:59:41.687634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:56.997 [2024-09-30 21:59:41.687641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.689559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.689605] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:56.997 [2024-09-30 21:59:41.689620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.895 ms 00:17:56.997 [2024-09-30 21:59:41.689627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.691000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.691029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:56.997 [2024-09-30 21:59:41.691040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.337 ms 00:17:56.997 [2024-09-30 21:59:41.691047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.692257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.692293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.997 [2024-09-30 21:59:41.692303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:17:56.997 [2024-09-30 21:59:41.692310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.693250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.997 [2024-09-30 21:59:41.693280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.997 [2024-09-30 21:59:41.693290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.880 ms 00:17:56.997 [2024-09-30 21:59:41.693297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.997 [2024-09-30 21:59:41.693329] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.997 [2024-09-30 21:59:41.693345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693454] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:56.997 [2024-09-30 21:59:41.693500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693670] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 
21:59:41.693877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.693995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:17:56.998 [2024-09-30 21:59:41.694087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:56.998 [2024-09-30 21:59:41.694223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:56.998 [2024-09-30 21:59:41.694233] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33834573-af61-4e72-baa0-f895c2fd95b0 00:17:56.998 [2024-09-30 21:59:41.694241] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:56.998 [2024-09-30 21:59:41.694250] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:56.998 [2024-09-30 21:59:41.694260] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:56.998 [2024-09-30 21:59:41.694271] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:56.998 [2024-09-30 21:59:41.694279] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:56.998 [2024-09-30 21:59:41.694290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:56.998 [2024-09-30 21:59:41.694297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:56.998 [2024-09-30 21:59:41.694304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:56.998 [2024-09-30 21:59:41.694312] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:56.999 [2024-09-30 21:59:41.694322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.999 [2024-09-30 21:59:41.694329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:56.999 [2024-09-30 21:59:41.694343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:17:56.999 [2024-09-30 21:59:41.694351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.695735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.999 [2024-09-30 21:59:41.695759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:56.999 [2024-09-30 21:59:41.695770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.365 ms 00:17:56.999 [2024-09-30 21:59:41.695777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.695862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.999 [2024-09-30 21:59:41.695879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:56.999 [2024-09-30 21:59:41.695888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:56.999 [2024-09-30 21:59:41.695896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.701101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.701133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.999 [2024-09-30 21:59:41.701144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.701151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.701220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.701230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.999 [2024-09-30 21:59:41.701242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.701251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.701288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.701299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.999 [2024-09-30 21:59:41.701309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.701317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.701336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.701344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.999 [2024-09-30 21:59:41.701353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.701361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.710040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.710075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.999 [2024-09-30 21:59:41.710087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.710095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.717096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.999 [2024-09-30 21:59:41.717109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 
[2024-09-30 21:59:41.717117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.717164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.999 [2024-09-30 21:59:41.717176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.717184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.717247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.999 [2024-09-30 21:59:41.717257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.717265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.717338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.999 [2024-09-30 21:59:41.717348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.717357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.717406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:56.999 [2024-09-30 21:59:41.717417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.717425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.717471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.999 [2024-09-30 21:59:41.717480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.717490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.999 [2024-09-30 21:59:41.717543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.999 [2024-09-30 21:59:41.717553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.999 [2024-09-30 21:59:41.717560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.999 [2024-09-30 21:59:41.717680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 47.059 ms, result 0 00:17:57.257 21:59:41 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:57.257 [2024-09-30 21:59:41.966286] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
00:17:57.257 [2024-09-30 21:59:41.966396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86804 ] 00:17:57.515 [2024-09-30 21:59:42.093728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:57.515 [2024-09-30 21:59:42.116086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.515 [2024-09-30 21:59:42.149437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.515 [2024-09-30 21:59:42.236537] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.515 [2024-09-30 21:59:42.236597] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.774 [2024-09-30 21:59:42.389307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.774 [2024-09-30 21:59:42.389349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:57.774 [2024-09-30 21:59:42.389361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.774 [2024-09-30 21:59:42.389369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.774 [2024-09-30 21:59:42.391526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.774 [2024-09-30 21:59:42.391561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.774 [2024-09-30 21:59:42.391570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.142 ms 00:17:57.774 [2024-09-30 21:59:42.391578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.774 [2024-09-30 21:59:42.391642] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:57.774 [2024-09-30 21:59:42.391855] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:57.774 [2024-09-30 21:59:42.391877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.774 [2024-09-30 21:59:42.391885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.774 [2024-09-30 21:59:42.391893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:57.774 [2024-09-30 21:59:42.391900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.774 [2024-09-30 21:59:42.393127] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:57.774 [2024-09-30 21:59:42.395248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.774 [2024-09-30 21:59:42.395285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:57.774 [2024-09-30 21:59:42.395295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:17:57.774 [2024-09-30 21:59:42.395304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.774 [2024-09-30 21:59:42.395357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.774 [2024-09-30 21:59:42.395372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:57.775 [2024-09-30 21:59:42.395381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:57.775 [2024-09-30 
21:59:42.395387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.400167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.775 [2024-09-30 21:59:42.400207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.775 [2024-09-30 21:59:42.400216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:17:57.775 [2024-09-30 21:59:42.400224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.400334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.775 [2024-09-30 21:59:42.400345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.775 [2024-09-30 21:59:42.400356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:57.775 [2024-09-30 21:59:42.400363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.400388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.775 [2024-09-30 21:59:42.400398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:57.775 [2024-09-30 21:59:42.400405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:57.775 [2024-09-30 21:59:42.400412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.400432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:57.775 [2024-09-30 21:59:42.401726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.775 [2024-09-30 21:59:42.401754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.775 [2024-09-30 21:59:42.401763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.298 ms 00:17:57.775 [2024-09-30 21:59:42.401770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.401805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.775 [2024-09-30 21:59:42.401813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:57.775 [2024-09-30 21:59:42.401823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:57.775 [2024-09-30 21:59:42.401829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.401846] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:57.775 [2024-09-30 21:59:42.401867] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:57.775 [2024-09-30 21:59:42.401900] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:57.775 [2024-09-30 21:59:42.401923] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:57.775 [2024-09-30 21:59:42.402024] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:57.775 [2024-09-30 21:59:42.402040] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:57.775 [2024-09-30 21:59:42.402050] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:57.775 [2024-09-30 21:59:42.402060] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402069] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402076] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:57.775 [2024-09-30 21:59:42.402083] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:57.775 [2024-09-30 21:59:42.402094] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:57.775 [2024-09-30 21:59:42.402101] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:57.775 [2024-09-30 21:59:42.402108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.775 [2024-09-30 21:59:42.402119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:57.775 [2024-09-30 21:59:42.402126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:17:57.775 [2024-09-30 21:59:42.402133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.402230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.775 [2024-09-30 21:59:42.402242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:57.775 [2024-09-30 21:59:42.402249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:57.775 [2024-09-30 21:59:42.402256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.775 [2024-09-30 21:59:42.402357] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:57.775 [2024-09-30 21:59:42.402372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:57.775 [2024-09-30 21:59:42.402383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:57.775 [2024-09-30 21:59:42.402412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:57.775 [2024-09-30 21:59:42.402443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.775 [2024-09-30 21:59:42.402458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:57.775 [2024-09-30 21:59:42.402465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:57.775 [2024-09-30 21:59:42.402473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.775 [2024-09-30 21:59:42.402480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:57.775 [2024-09-30 21:59:42.402488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:57.775 [2024-09-30 21:59:42.402495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:57.775 [2024-09-30 21:59:42.402510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:57.775 [2024-09-30 21:59:42.402534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:57.775 [2024-09-30 21:59:42.402557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:57.775 [2024-09-30 21:59:42.402582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:57.775 [2024-09-30 21:59:42.402604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:57.775 [2024-09-30 21:59:42.402626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.775 [2024-09-30 21:59:42.402640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:57.775 [2024-09-30 21:59:42.402648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:57.775 [2024-09-30 21:59:42.402655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.775 [2024-09-30 21:59:42.402663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:57.775 [2024-09-30 21:59:42.402670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:57.775 [2024-09-30 21:59:42.402677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:57.775 [2024-09-30 21:59:42.402694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:57.775 [2024-09-30 21:59:42.402701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:57.775 [2024-09-30 21:59:42.402718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:57.775 [2024-09-30 21:59:42.402726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.775 [2024-09-30 21:59:42.402742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:57.775 [2024-09-30 21:59:42.402750] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:57.775 [2024-09-30 21:59:42.402757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:57.775 [2024-09-30 21:59:42.402766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:57.775 [2024-09-30 21:59:42.402773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:57.775 [2024-09-30 21:59:42.402780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:57.775 [2024-09-30 21:59:42.402787] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:57.775 [2024-09-30 21:59:42.402796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.775 [2024-09-30 21:59:42.402804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:57.775 [2024-09-30 21:59:42.402813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:57.776 [2024-09-30 21:59:42.402821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:57.776 [2024-09-30 21:59:42.402828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:57.776 [2024-09-30 21:59:42.402835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:57.776 [2024-09-30 21:59:42.402841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:57.776 [2024-09-30 21:59:42.402848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:57.776 [2024-09-30 21:59:42.402855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:57.776 [2024-09-30 21:59:42.402861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:57.776 [2024-09-30 21:59:42.402868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:57.776 [2024-09-30 21:59:42.402875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:57.776 [2024-09-30 21:59:42.402881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:57.776 [2024-09-30 21:59:42.402888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:57.776 [2024-09-30 21:59:42.402895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:57.776 [2024-09-30 21:59:42.402902] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:57.776 [2024-09-30 21:59:42.402910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.776 [2024-09-30 21:59:42.402918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:57.776 [2024-09-30 21:59:42.402927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:57.776 [2024-09-30 21:59:42.402934] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:57.776 [2024-09-30 21:59:42.402941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:57.776 [2024-09-30 21:59:42.402948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.402957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:57.776 [2024-09-30 21:59:42.402964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:17:57.776 [2024-09-30 21:59:42.402971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.419178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.419229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.776 [2024-09-30 21:59:42.419241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.160 ms 00:17:57.776 [2024-09-30 21:59:42.419249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.419376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.419387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:57.776 [2024-09-30 21:59:42.419400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:17:57.776 [2024-09-30 21:59:42.419407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.428599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.428641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.776 [2024-09-30 21:59:42.428653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.172 ms 00:17:57.776 [2024-09-30 21:59:42.428663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.428716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.428735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.776 [2024-09-30 21:59:42.428747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.776 [2024-09-30 21:59:42.428762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.429109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.429142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.776 [2024-09-30 21:59:42.429154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:17:57.776 [2024-09-30 21:59:42.429164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.429351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.429370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.776 [2024-09-30 21:59:42.429385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:17:57.776 [2024-09-30 21:59:42.429401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.434655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.434686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.776 [2024-09-30 21:59:42.434700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.211 ms 00:17:57.776 [2024-09-30 21:59:42.434707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.437042] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:57.776 [2024-09-30 21:59:42.437075] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:57.776 [2024-09-30 21:59:42.437085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.437093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:57.776 [2024-09-30 21:59:42.437102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.296 ms 00:17:57.776 [2024-09-30 21:59:42.437109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.451388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.451421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:57.776 [2024-09-30 21:59:42.451431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.238 ms 00:17:57.776 [2024-09-30 21:59:42.451439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.453340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.453382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:57.776 [2024-09-30 21:59:42.453393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.828 ms 00:17:57.776 [2024-09-30 21:59:42.453401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.454745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.454775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:57.776 [2024-09-30 21:59:42.454784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.302 ms 00:17:57.776 [2024-09-30 21:59:42.454797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.455104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.455125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:57.776 [2024-09-30 21:59:42.455136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:57.776 [2024-09-30 21:59:42.455143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.470091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.470129] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:57.776 [2024-09-30 21:59:42.470140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.928 ms 00:17:57.776 [2024-09-30 21:59:42.470148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.477405] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:57.776 [2024-09-30 21:59:42.491294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.491327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:57.776 [2024-09-30 21:59:42.491338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.085 ms 00:17:57.776 [2024-09-30 21:59:42.491345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.491429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.491439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:57.776 [2024-09-30 21:59:42.491450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:57.776 [2024-09-30 21:59:42.491458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.491503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.491513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:57.776 [2024-09-30 21:59:42.491520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:57.776 [2024-09-30 21:59:42.491527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.491552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.491560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:57.776 [2024-09-30 21:59:42.491574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:57.776 [2024-09-30 21:59:42.491584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.491619] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:57.776 [2024-09-30 21:59:42.491628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.491635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:57.776 [2024-09-30 21:59:42.491642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:57.776 [2024-09-30 21:59:42.491648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.494786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.494820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:57.776 [2024-09-30 21:59:42.494829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.120 ms 00:17:57.776 [2024-09-30 21:59:42.494837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.776 [2024-09-30 21:59:42.494909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.776 [2024-09-30 21:59:42.494919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:57.776 [2024-09-30 21:59:42.494927] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:57.777 [2024-09-30 21:59:42.494934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.777 [2024-09-30 21:59:42.495762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:57.777 [2024-09-30 21:59:42.496743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 106.196 ms, result 0 00:17:57.777 [2024-09-30 21:59:42.497460] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.777 [2024-09-30 21:59:42.507803] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:04.071  Copying: 43/256 [MB] (43 MBps) Copying: 86/256 [MB] (43 MBps) Copying: 129/256 [MB] (43 MBps) Copying: 172/256 [MB] (42 MBps) Copying: 216/256 [MB] (43 MBps) Copying: 256/256 [MB] (average 42 MBps)[2024-09-30 21:59:48.811316] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:04.071 [2024-09-30 21:59:48.813300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.813383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:04.071 [2024-09-30 21:59:48.813412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:04.071 [2024-09-30 21:59:48.813433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.813487] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:04.071 [2024-09-30 21:59:48.814215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.814274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:04.071 [2024-09-30 21:59:48.814300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:18:04.071 [2024-09-30 21:59:48.814321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.817782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.817846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:04.071 [2024-09-30 21:59:48.817885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.406 ms 00:18:04.071 [2024-09-30 21:59:48.817910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.823311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.823332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:04.071 [2024-09-30 21:59:48.823342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:18:04.071 [2024-09-30 21:59:48.823350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.830332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.830360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:04.071 [2024-09-30 21:59:48.830370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.948 ms 00:18:04.071 [2024-09-30 21:59:48.830386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
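[editor's note] The Copying progress above ends at "average 42 MBps" for 256 MiB. That figure is just bytes moved over elapsed wall time; a rough Python sketch using the IO-channel create/destroy timestamps read off the log (the tool's own accounting excludes some setup, so the result lands near, not exactly on, 42 MBps):

from datetime import datetime

t0 = datetime.fromisoformat("2024-09-30 21:59:42.507803")  # FTL IO channel created
t1 = datetime.fromisoformat("2024-09-30 21:59:48.811316")  # FTL IO channel destroyed
mib_copied = 256
print(f"{mib_copied / (t1 - t0).total_seconds():.1f} MBps")  # ~40.6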
00:18:04.071 [2024-09-30 21:59:48.832254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.832307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:04.071 [2024-09-30 21:59:48.832319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:18:04.071 [2024-09-30 21:59:48.832327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.835679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.835714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:04.071 [2024-09-30 21:59:48.835723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:18:04.071 [2024-09-30 21:59:48.835731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.835863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.835910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:04.071 [2024-09-30 21:59:48.835919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:04.071 [2024-09-30 21:59:48.835932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.837440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.837473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:04.071 [2024-09-30 21:59:48.837482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.491 ms 00:18:04.071 [2024-09-30 21:59:48.837488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.838542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.838572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:04.071 [2024-09-30 21:59:48.838581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:18:04.071 [2024-09-30 21:59:48.838587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.839461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.839491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:04.071 [2024-09-30 21:59:48.839499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:18:04.071 [2024-09-30 21:59:48.839506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.840358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.071 [2024-09-30 21:59:48.840388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:04.071 [2024-09-30 21:59:48.840396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:18:04.071 [2024-09-30 21:59:48.840403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.071 [2024-09-30 21:59:48.840421] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:04.071 [2024-09-30 21:59:48.840433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:04.071 [2024-09-30 21:59:48.840678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840816] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.840998] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:04.072 [2024-09-30 21:59:48.841179] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:04.072 [2024-09-30 21:59:48.841200] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 33834573-af61-4e72-baa0-f895c2fd95b0 00:18:04.072 [2024-09-30 21:59:48.841209] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:04.072 [2024-09-30 21:59:48.841216] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:04.072 [2024-09-30 21:59:48.841223] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:04.072 [2024-09-30 21:59:48.841231] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:04.072 [2024-09-30 21:59:48.841238] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:04.072 [2024-09-30 21:59:48.841250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:04.072 [2024-09-30 21:59:48.841259] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:04.072 [2024-09-30 21:59:48.841265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:04.072 [2024-09-30 21:59:48.841272] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:04.072 [2024-09-30 21:59:48.841278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.072 [2024-09-30 21:59:48.841286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:04.072 [2024-09-30 21:59:48.841294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.858 ms 00:18:04.072 [2024-09-30 21:59:48.841301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.072 [2024-09-30 21:59:48.842732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.072 [2024-09-30 21:59:48.842756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:04.072 [2024-09-30 21:59:48.842766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.415 ms 00:18:04.072 [2024-09-30 21:59:48.842773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.072 [2024-09-30 21:59:48.842855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:04.072 [2024-09-30 21:59:48.842864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:04.072 [2024-09-30 21:59:48.842872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:04.072 [2024-09-30 21:59:48.842879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.072 [2024-09-30 21:59:48.847666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.072 [2024-09-30 21:59:48.847696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:04.072 [2024-09-30 21:59:48.847705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.072 [2024-09-30 21:59:48.847717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.072 [2024-09-30 21:59:48.847787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.847799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:04.073 [2024-09-30 21:59:48.847807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.847814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.847849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.847858] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:04.073 [2024-09-30 21:59:48.847866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.847873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.847891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.847898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:04.073 [2024-09-30 21:59:48.847906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.847913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.856533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.856569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:04.073 [2024-09-30 21:59:48.856579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.856592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.863636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.863674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:04.073 [2024-09-30 21:59:48.863689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.863697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.863741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.863749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:04.073 [2024-09-30 21:59:48.863757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.863764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.863791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.863801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:04.073 [2024-09-30 21:59:48.863808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.863815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.863877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.863891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:04.073 [2024-09-30 21:59:48.863899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.863906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.863934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.863942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:04.073 [2024-09-30 21:59:48.863952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.863959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.863994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
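[editor's note] The Rollback records here replay the startup Actions in reverse order (reloc, bands metadata, ..., memory pools, superblock, cache bdev, base bdev), each at 0.000 ms since a clean shutdown leaves nothing to undo. A minimal sketch of that unwind pattern (not SPDK code; step names copied from the log, order assumed from it):

steps = ["Open base bdev", "Open cache bdev", "Initialize superblock",
         "Initialize memory pools", "Initialize bands"]

def rollback(done):
    # Undo completed steps in reverse of the order they ran.
    for name in reversed(done):
        print(f"Rollback name: {name} duration: 0.000 ms status: 0")

rollback(steps)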
00:18:04.073 [2024-09-30 21:59:48.864012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:04.073 [2024-09-30 21:59:48.864020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.864027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.864072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:04.073 [2024-09-30 21:59:48.864090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:04.073 [2024-09-30 21:59:48.864098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:04.073 [2024-09-30 21:59:48.864104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:04.073 [2024-09-30 21:59:48.864241] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 50.957 ms, result 0 00:18:04.331 00:18:04.331 00:18:04.331 21:59:49 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:04.903 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:04.903 21:59:49 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:04.903 21:59:49 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:04.903 21:59:49 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:04.903 21:59:49 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:04.903 21:59:49 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:04.903 21:59:49 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:04.903 21:59:49 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 86766 00:18:04.903 21:59:49 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 86766 ']' 00:18:04.903 21:59:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 86766 00:18:04.903 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86766) - No such process 00:18:04.903 Process with pid 86766 is not found 00:18:04.903 21:59:49 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 86766 is not found' 00:18:04.903 00:18:04.903 real 0m40.071s 00:18:04.903 user 0m54.070s 00:18:04.903 sys 0m4.865s 00:18:04.903 21:59:49 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:04.903 21:59:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 ************************************ 00:18:04.903 END TEST ftl_trim 00:18:04.903 ************************************ 00:18:04.903 21:59:49 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:04.903 21:59:49 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:04.903 21:59:49 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:04.903 21:59:49 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:04.903 ************************************ 00:18:04.903 START TEST ftl_restore 00:18:04.903 ************************************ 00:18:04.903 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:05.162 * Looking for test storage... 
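[editor's note] Just above, trim.sh closes by running md5sum -c against a checksum file recorded before the trim workload, so the restored FTL data is verified byte-for-byte. A Python sketch of the equivalent check (file names are placeholders, not the test's real paths):

import hashlib

def md5_of(path):
    # Stream the file in 1 MiB chunks to avoid loading it whole.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

recorded = open("testfile.md5").read().split()[0]   # md5sum format: "digest  filename"
print("OK" if md5_of("data") == recorded else "FAILED")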
00:18:05.162 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.162 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:05.162 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:05.162 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:05.162 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.162 21:59:49 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.163 21:59:49 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:05.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.163 --rc genhtml_branch_coverage=1 00:18:05.163 --rc genhtml_function_coverage=1 00:18:05.163 --rc genhtml_legend=1 00:18:05.163 --rc geninfo_all_blocks=1 00:18:05.163 --rc geninfo_unexecuted_blocks=1 00:18:05.163 00:18:05.163 ' 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:05.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.163 --rc genhtml_branch_coverage=1 00:18:05.163 --rc genhtml_function_coverage=1 
00:18:05.163 --rc genhtml_legend=1 00:18:05.163 --rc geninfo_all_blocks=1 00:18:05.163 --rc geninfo_unexecuted_blocks=1 00:18:05.163 00:18:05.163 ' 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:05.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.163 --rc genhtml_branch_coverage=1 00:18:05.163 --rc genhtml_function_coverage=1 00:18:05.163 --rc genhtml_legend=1 00:18:05.163 --rc geninfo_all_blocks=1 00:18:05.163 --rc geninfo_unexecuted_blocks=1 00:18:05.163 00:18:05.163 ' 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:05.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.163 --rc genhtml_branch_coverage=1 00:18:05.163 --rc genhtml_function_coverage=1 00:18:05.163 --rc genhtml_legend=1 00:18:05.163 --rc geninfo_all_blocks=1 00:18:05.163 --rc geninfo_unexecuted_blocks=1 00:18:05.163 00:18:05.163 ' 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.pCaq83eEOK 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=86952 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 86952 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 86952 ']' 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:05.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:05.163 21:59:49 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:05.163 21:59:49 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.163 [2024-09-30 21:59:49.924147] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:18:05.163 [2024-09-30 21:59:49.924287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86952 ] 00:18:05.422 [2024-09-30 21:59:50.053623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
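[editor's note] The xtrace a few lines up walks scripts/common.sh's cmp_versions: both version strings are split on dots and compared numerically component by component, with missing components treated as 0, which is why lcov "1.15" sorts below "2" and the legacy LCOV option set is chosen. A Python rendering of that same comparison logic:

def lt(a, b):
    # Numeric, component-wise compare; pad the shorter version with zeros.
    va = [int(x) for x in a.split(".")]
    vb = [int(x) for x in b.split(".")]
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))
    vb += [0] * (n - len(vb))
    return va < vb

print(lt("1.15", "2"))   # True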
00:18:05.422 [2024-09-30 21:59:50.073871] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.422 [2024-09-30 21:59:50.116387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.988 21:59:50 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.988 21:59:50 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:05.988 21:59:50 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:05.988 21:59:50 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:05.988 21:59:50 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:05.988 21:59:50 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:05.988 21:59:50 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:05.988 21:59:50 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:06.246 21:59:51 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:06.246 21:59:51 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:06.246 21:59:51 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:06.246 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:06.246 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:06.246 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:06.246 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:06.246 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:06.505 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:06.505 { 00:18:06.505 "name": "nvme0n1", 00:18:06.505 "aliases": [ 00:18:06.505 "50951cba-edee-4acc-98ae-f16b4604a353" 00:18:06.505 ], 00:18:06.505 "product_name": "NVMe disk", 00:18:06.505 "block_size": 4096, 00:18:06.505 "num_blocks": 1310720, 00:18:06.505 "uuid": "50951cba-edee-4acc-98ae-f16b4604a353", 00:18:06.505 "numa_id": -1, 00:18:06.505 "assigned_rate_limits": { 00:18:06.505 "rw_ios_per_sec": 0, 00:18:06.505 "rw_mbytes_per_sec": 0, 00:18:06.505 "r_mbytes_per_sec": 0, 00:18:06.505 "w_mbytes_per_sec": 0 00:18:06.505 }, 00:18:06.505 "claimed": true, 00:18:06.505 "claim_type": "read_many_write_one", 00:18:06.505 "zoned": false, 00:18:06.505 "supported_io_types": { 00:18:06.505 "read": true, 00:18:06.505 "write": true, 00:18:06.505 "unmap": true, 00:18:06.505 "flush": true, 00:18:06.505 "reset": true, 00:18:06.505 "nvme_admin": true, 00:18:06.505 "nvme_io": true, 00:18:06.505 "nvme_io_md": false, 00:18:06.505 "write_zeroes": true, 00:18:06.505 "zcopy": false, 00:18:06.505 "get_zone_info": false, 00:18:06.505 "zone_management": false, 00:18:06.505 "zone_append": false, 00:18:06.505 "compare": true, 00:18:06.505 "compare_and_write": false, 00:18:06.505 "abort": true, 00:18:06.505 "seek_hole": false, 00:18:06.505 "seek_data": false, 00:18:06.505 "copy": true, 00:18:06.505 "nvme_iov_md": false 00:18:06.505 }, 00:18:06.505 "driver_specific": { 00:18:06.505 "nvme": [ 00:18:06.505 { 00:18:06.505 "pci_address": "0000:00:11.0", 00:18:06.505 "trid": { 00:18:06.505 "trtype": "PCIe", 00:18:06.505 "traddr": "0000:00:11.0" 00:18:06.505 }, 00:18:06.505 "ctrlr_data": { 00:18:06.505 "cntlid": 0, 00:18:06.505 "vendor_id": "0x1b36", 00:18:06.505 "model_number": "QEMU NVMe Ctrl", 00:18:06.505 
"serial_number": "12341", 00:18:06.505 "firmware_revision": "8.0.0", 00:18:06.505 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:06.505 "oacs": { 00:18:06.505 "security": 0, 00:18:06.505 "format": 1, 00:18:06.505 "firmware": 0, 00:18:06.505 "ns_manage": 1 00:18:06.505 }, 00:18:06.505 "multi_ctrlr": false, 00:18:06.505 "ana_reporting": false 00:18:06.505 }, 00:18:06.505 "vs": { 00:18:06.505 "nvme_version": "1.4" 00:18:06.505 }, 00:18:06.505 "ns_data": { 00:18:06.505 "id": 1, 00:18:06.505 "can_share": false 00:18:06.505 } 00:18:06.505 } 00:18:06.505 ], 00:18:06.505 "mp_policy": "active_passive" 00:18:06.505 } 00:18:06.505 } 00:18:06.505 ]' 00:18:06.505 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:06.505 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:06.505 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:06.505 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:06.505 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:06.505 21:59:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:06.505 21:59:51 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:06.505 21:59:51 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:06.505 21:59:51 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:06.505 21:59:51 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:06.505 21:59:51 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:06.764 21:59:51 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=e50cf9f8-b914-42c3-a503-07b832f028b6 00:18:06.764 21:59:51 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:06.764 21:59:51 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e50cf9f8-b914-42c3-a503-07b832f028b6 00:18:07.022 21:59:51 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:07.280 21:59:51 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a787b082-6884-49f0-a5f0-29cc6b84d769 00:18:07.280 21:59:51 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a787b082-6884-49f0-a5f0-29cc6b84d769 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:07.539 21:59:52 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1380 
-- # local bs 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:07.539 { 00:18:07.539 "name": "ec964576-5e6f-4d2d-bbc6-e21958cb4fa9", 00:18:07.539 "aliases": [ 00:18:07.539 "lvs/nvme0n1p0" 00:18:07.539 ], 00:18:07.539 "product_name": "Logical Volume", 00:18:07.539 "block_size": 4096, 00:18:07.539 "num_blocks": 26476544, 00:18:07.539 "uuid": "ec964576-5e6f-4d2d-bbc6-e21958cb4fa9", 00:18:07.539 "assigned_rate_limits": { 00:18:07.539 "rw_ios_per_sec": 0, 00:18:07.539 "rw_mbytes_per_sec": 0, 00:18:07.539 "r_mbytes_per_sec": 0, 00:18:07.539 "w_mbytes_per_sec": 0 00:18:07.539 }, 00:18:07.539 "claimed": false, 00:18:07.539 "zoned": false, 00:18:07.539 "supported_io_types": { 00:18:07.539 "read": true, 00:18:07.539 "write": true, 00:18:07.539 "unmap": true, 00:18:07.539 "flush": false, 00:18:07.539 "reset": true, 00:18:07.539 "nvme_admin": false, 00:18:07.539 "nvme_io": false, 00:18:07.539 "nvme_io_md": false, 00:18:07.539 "write_zeroes": true, 00:18:07.539 "zcopy": false, 00:18:07.539 "get_zone_info": false, 00:18:07.539 "zone_management": false, 00:18:07.539 "zone_append": false, 00:18:07.539 "compare": false, 00:18:07.539 "compare_and_write": false, 00:18:07.539 "abort": false, 00:18:07.539 "seek_hole": true, 00:18:07.539 "seek_data": true, 00:18:07.539 "copy": false, 00:18:07.539 "nvme_iov_md": false 00:18:07.539 }, 00:18:07.539 "driver_specific": { 00:18:07.539 "lvol": { 00:18:07.539 "lvol_store_uuid": "a787b082-6884-49f0-a5f0-29cc6b84d769", 00:18:07.539 "base_bdev": "nvme0n1", 00:18:07.539 "thin_provision": true, 00:18:07.539 "num_allocated_clusters": 0, 00:18:07.539 "snapshot": false, 00:18:07.539 "clone": false, 00:18:07.539 "esnap_clone": false 00:18:07.539 } 00:18:07.539 } 00:18:07.539 } 00:18:07.539 ]' 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:07.539 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:07.798 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:07.798 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:07.798 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:07.798 21:59:52 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:07.798 21:59:52 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:07.798 21:59:52 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:08.056 21:59:52 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:08.056 21:59:52 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:08.056 21:59:52 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:08.056 21:59:52 ftl.ftl_restore -- 
common/autotest_common.sh@1381 -- # local nb 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:08.056 { 00:18:08.056 "name": "ec964576-5e6f-4d2d-bbc6-e21958cb4fa9", 00:18:08.056 "aliases": [ 00:18:08.056 "lvs/nvme0n1p0" 00:18:08.056 ], 00:18:08.056 "product_name": "Logical Volume", 00:18:08.056 "block_size": 4096, 00:18:08.056 "num_blocks": 26476544, 00:18:08.056 "uuid": "ec964576-5e6f-4d2d-bbc6-e21958cb4fa9", 00:18:08.056 "assigned_rate_limits": { 00:18:08.056 "rw_ios_per_sec": 0, 00:18:08.056 "rw_mbytes_per_sec": 0, 00:18:08.056 "r_mbytes_per_sec": 0, 00:18:08.056 "w_mbytes_per_sec": 0 00:18:08.056 }, 00:18:08.056 "claimed": false, 00:18:08.056 "zoned": false, 00:18:08.056 "supported_io_types": { 00:18:08.056 "read": true, 00:18:08.056 "write": true, 00:18:08.056 "unmap": true, 00:18:08.056 "flush": false, 00:18:08.056 "reset": true, 00:18:08.056 "nvme_admin": false, 00:18:08.056 "nvme_io": false, 00:18:08.056 "nvme_io_md": false, 00:18:08.056 "write_zeroes": true, 00:18:08.056 "zcopy": false, 00:18:08.056 "get_zone_info": false, 00:18:08.056 "zone_management": false, 00:18:08.056 "zone_append": false, 00:18:08.056 "compare": false, 00:18:08.056 "compare_and_write": false, 00:18:08.056 "abort": false, 00:18:08.056 "seek_hole": true, 00:18:08.056 "seek_data": true, 00:18:08.056 "copy": false, 00:18:08.056 "nvme_iov_md": false 00:18:08.056 }, 00:18:08.056 "driver_specific": { 00:18:08.056 "lvol": { 00:18:08.056 "lvol_store_uuid": "a787b082-6884-49f0-a5f0-29cc6b84d769", 00:18:08.056 "base_bdev": "nvme0n1", 00:18:08.056 "thin_provision": true, 00:18:08.056 "num_allocated_clusters": 0, 00:18:08.056 "snapshot": false, 00:18:08.056 "clone": false, 00:18:08.056 "esnap_clone": false 00:18:08.056 } 00:18:08.056 } 00:18:08.056 } 00:18:08.056 ]' 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:08.056 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:08.314 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:08.314 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:08.314 21:59:52 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:08.314 21:59:52 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:08.314 21:59:52 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:08.314 21:59:53 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:08.314 21:59:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:08.314 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:08.314 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:08.314 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:08.314 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:08.315 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 00:18:08.572 21:59:53 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:08.572 { 00:18:08.572 "name": "ec964576-5e6f-4d2d-bbc6-e21958cb4fa9", 00:18:08.572 "aliases": [ 00:18:08.572 "lvs/nvme0n1p0" 00:18:08.572 ], 00:18:08.572 "product_name": "Logical Volume", 00:18:08.572 "block_size": 4096, 00:18:08.572 "num_blocks": 26476544, 00:18:08.572 "uuid": "ec964576-5e6f-4d2d-bbc6-e21958cb4fa9", 00:18:08.572 "assigned_rate_limits": { 00:18:08.572 "rw_ios_per_sec": 0, 00:18:08.572 "rw_mbytes_per_sec": 0, 00:18:08.572 "r_mbytes_per_sec": 0, 00:18:08.572 "w_mbytes_per_sec": 0 00:18:08.572 }, 00:18:08.572 "claimed": false, 00:18:08.572 "zoned": false, 00:18:08.572 "supported_io_types": { 00:18:08.572 "read": true, 00:18:08.572 "write": true, 00:18:08.572 "unmap": true, 00:18:08.572 "flush": false, 00:18:08.572 "reset": true, 00:18:08.572 "nvme_admin": false, 00:18:08.572 "nvme_io": false, 00:18:08.572 "nvme_io_md": false, 00:18:08.572 "write_zeroes": true, 00:18:08.572 "zcopy": false, 00:18:08.572 "get_zone_info": false, 00:18:08.572 "zone_management": false, 00:18:08.572 "zone_append": false, 00:18:08.572 "compare": false, 00:18:08.572 "compare_and_write": false, 00:18:08.572 "abort": false, 00:18:08.572 "seek_hole": true, 00:18:08.572 "seek_data": true, 00:18:08.572 "copy": false, 00:18:08.572 "nvme_iov_md": false 00:18:08.572 }, 00:18:08.572 "driver_specific": { 00:18:08.572 "lvol": { 00:18:08.572 "lvol_store_uuid": "a787b082-6884-49f0-a5f0-29cc6b84d769", 00:18:08.572 "base_bdev": "nvme0n1", 00:18:08.572 "thin_provision": true, 00:18:08.572 "num_allocated_clusters": 0, 00:18:08.572 "snapshot": false, 00:18:08.572 "clone": false, 00:18:08.572 "esnap_clone": false 00:18:08.572 } 00:18:08.572 } 00:18:08.572 } 00:18:08.572 ]' 00:18:08.572 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:08.572 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:08.572 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:08.572 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:08.572 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:08.572 21:59:53 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:08.572 21:59:53 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:08.572 21:59:53 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 --l2p_dram_limit 10' 00:18:08.572 21:59:53 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:08.572 21:59:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:08.572 21:59:53 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:08.572 21:59:53 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:08.572 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:08.572 21:59:53 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ec964576-5e6f-4d2d-bbc6-e21958cb4fa9 --l2p_dram_limit 10 -c nvc0n1p0 00:18:08.831 [2024-09-30 21:59:53.524840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.524880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:08.831 [2024-09-30 21:59:53.524892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 
ms 00:18:08.831 [2024-09-30 21:59:53.524901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.524942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.524949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.831 [2024-09-30 21:59:53.524958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:08.831 [2024-09-30 21:59:53.524966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.524990] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:08.831 [2024-09-30 21:59:53.525293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:08.831 [2024-09-30 21:59:53.525312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.525320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.831 [2024-09-30 21:59:53.525328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:18:08.831 [2024-09-30 21:59:53.525334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.525386] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 134f5004-12dc-48dc-a030-6aaa397fe814 00:18:08.831 [2024-09-30 21:59:53.526392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.526417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:08.831 [2024-09-30 21:59:53.526425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:08.831 [2024-09-30 21:59:53.526436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.531619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.531649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.831 [2024-09-30 21:59:53.531657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.151 ms 00:18:08.831 [2024-09-30 21:59:53.531666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.531733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.531742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.831 [2024-09-30 21:59:53.531751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:08.831 [2024-09-30 21:59:53.531758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.531793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.531803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:08.831 [2024-09-30 21:59:53.531809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:08.831 [2024-09-30 21:59:53.531816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.531835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:08.831 [2024-09-30 21:59:53.533161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 
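The bdev_ftl_create above is the last link in a provisioning chain the trace performs one RPC at a time (attach base NVMe, create lvstore, create thin lvol, attach cache NVMe, split it); the "[: : integer expression expected" complaint from restore.sh line 54 is bash rejecting a numeric test against an empty variable and is non-fatal here, since FTL startup proceeds right after. Replayed end to end with rpc.py, using only calls and values that appear in the log (the two UUID placeholders are run-specific):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base dev -> nvme0n1
  "$rpc" bdev_lvol_create_lvstore nvme0n1 lvs                          # lvstore "lvs"
  "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>            # thin lvol, 103424 MiB
  "$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # cache dev -> nvc0n1
  "$rpc" bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB split
  "$rpc" -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> --l2p_dram_limit 10 -c nvc0n1p0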
[2024-09-30 21:59:53.533205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.831 [2024-09-30 21:59:53.533214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.328 ms 00:18:08.831 [2024-09-30 21:59:53.533220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.533245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.533251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:08.831 [2024-09-30 21:59:53.533261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.831 [2024-09-30 21:59:53.533267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.533281] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:08.831 [2024-09-30 21:59:53.533387] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:08.831 [2024-09-30 21:59:53.533405] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:08.831 [2024-09-30 21:59:53.533414] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:08.831 [2024-09-30 21:59:53.533423] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:08.831 [2024-09-30 21:59:53.533433] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:08.831 [2024-09-30 21:59:53.533446] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:08.831 [2024-09-30 21:59:53.533451] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:08.831 [2024-09-30 21:59:53.533461] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:08.831 [2024-09-30 21:59:53.533467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:08.831 [2024-09-30 21:59:53.533475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.831 [2024-09-30 21:59:53.533480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:08.831 [2024-09-30 21:59:53.533488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:18:08.831 [2024-09-30 21:59:53.533493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.831 [2024-09-30 21:59:53.533562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.832 [2024-09-30 21:59:53.533568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:08.832 [2024-09-30 21:59:53.533575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:08.832 [2024-09-30 21:59:53.533580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.832 [2024-09-30 21:59:53.533654] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:08.832 [2024-09-30 21:59:53.533667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:08.832 [2024-09-30 21:59:53.533676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533689] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region l2p 00:18:08.832 [2024-09-30 21:59:53.533694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:08.832 [2024-09-30 21:59:53.533714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.832 [2024-09-30 21:59:53.533725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:08.832 [2024-09-30 21:59:53.533730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:08.832 [2024-09-30 21:59:53.533737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.832 [2024-09-30 21:59:53.533742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:08.832 [2024-09-30 21:59:53.533749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:08.832 [2024-09-30 21:59:53.533754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:08.832 [2024-09-30 21:59:53.533766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:08.832 [2024-09-30 21:59:53.533784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:08.832 [2024-09-30 21:59:53.533800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:08.832 [2024-09-30 21:59:53.533820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:08.832 [2024-09-30 21:59:53.533840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:08.832 [2024-09-30 21:59:53.533860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.832 [2024-09-30 21:59:53.533874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:08.832 [2024-09-30 21:59:53.533879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:08.832 [2024-09-30 21:59:53.533887] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.832 [2024-09-30 21:59:53.533892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:08.832 [2024-09-30 21:59:53.533899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:08.832 [2024-09-30 21:59:53.533905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:08.832 [2024-09-30 21:59:53.533917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:08.832 [2024-09-30 21:59:53.533924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533929] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:08.832 [2024-09-30 21:59:53.533941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:08.832 [2024-09-30 21:59:53.533947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.832 [2024-09-30 21:59:53.533955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.832 [2024-09-30 21:59:53.533961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:08.832 [2024-09-30 21:59:53.533969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:08.832 [2024-09-30 21:59:53.533976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:08.832 [2024-09-30 21:59:53.533984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:08.832 [2024-09-30 21:59:53.533990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:08.832 [2024-09-30 21:59:53.533997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:08.832 [2024-09-30 21:59:53.534005] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:08.832 [2024-09-30 21:59:53.534014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.832 [2024-09-30 21:59:53.534023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:08.832 [2024-09-30 21:59:53.534032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:08.832 [2024-09-30 21:59:53.534038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:08.832 [2024-09-30 21:59:53.534046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:08.832 [2024-09-30 21:59:53.534052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:08.832 [2024-09-30 21:59:53.534061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:08.832 [2024-09-30 21:59:53.534068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:08.832 [2024-09-30 21:59:53.534075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe 
ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:08.832 [2024-09-30 21:59:53.534081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:08.832 [2024-09-30 21:59:53.534088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:08.832 [2024-09-30 21:59:53.534094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:08.832 [2024-09-30 21:59:53.534102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:08.832 [2024-09-30 21:59:53.534108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:08.832 [2024-09-30 21:59:53.534115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:08.832 [2024-09-30 21:59:53.534122] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:08.832 [2024-09-30 21:59:53.534130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.832 [2024-09-30 21:59:53.534137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:08.832 [2024-09-30 21:59:53.534145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:08.832 [2024-09-30 21:59:53.534151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:08.832 [2024-09-30 21:59:53.534159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:08.832 [2024-09-30 21:59:53.534166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.832 [2024-09-30 21:59:53.534176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:08.832 [2024-09-30 21:59:53.534182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:18:08.832 [2024-09-30 21:59:53.534199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.832 [2024-09-30 21:59:53.534235] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
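The layout dump above is internally consistent and worth a quick cross-check: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB the l2p region occupies, and at the 4096-byte block size those entries address 80 GiB of the 103424 MiB base device (the remainder presumably going to the metadata regions and over-provisioning). A few lines of shell arithmetic reproduce the numbers, with all inputs taken straight from the dump:

  entries=20971520   # "L2P entries" from the layout dump
  entry_sz=4         # "L2P address size" in bytes
  blk_sz=4096        # bdev block_size reported earlier
  echo "l2p region: $(( entries * entry_sz / 1024 / 1024 )) MiB"   # -> 80 MiB
  echo "mapped:     $(( entries * blk_sz / 1024**3 )) GiB"         # -> 80 GiB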
00:18:08.832 [2024-09-30 21:59:53.534247] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:11.360 [2024-09-30 21:59:55.899364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.899422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:11.360 [2024-09-30 21:59:55.899439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2365.121 ms 00:18:11.360 [2024-09-30 21:59:55.899449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.907723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.907764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.360 [2024-09-30 21:59:55.907781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.214 ms 00:18:11.360 [2024-09-30 21:59:55.907793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.907875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.907888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:11.360 [2024-09-30 21:59:55.907897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:11.360 [2024-09-30 21:59:55.907906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.915997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.916034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.360 [2024-09-30 21:59:55.916043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.046 ms 00:18:11.360 [2024-09-30 21:59:55.916061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.916088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.916097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.360 [2024-09-30 21:59:55.916105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:11.360 [2024-09-30 21:59:55.916114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.916454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.916481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.360 [2024-09-30 21:59:55.916490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:18:11.360 [2024-09-30 21:59:55.916501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.916599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.916613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.360 [2024-09-30 21:59:55.916623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:11.360 [2024-09-30 21:59:55.916633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.930427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.930485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.360 [2024-09-30 
21:59:55.930502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.772 ms 00:18:11.360 [2024-09-30 21:59:55.930517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.941871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:11.360 [2024-09-30 21:59:55.944585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.944615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.360 [2024-09-30 21:59:55.944631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.954 ms 00:18:11.360 [2024-09-30 21:59:55.944639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.996176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.996232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:11.360 [2024-09-30 21:59:55.996251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.508 ms 00:18:11.360 [2024-09-30 21:59:55.996263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.996466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.996479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.360 [2024-09-30 21:59:55.996490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:18:11.360 [2024-09-30 21:59:55.996498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:55.999361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:55.999398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:11.360 [2024-09-30 21:59:55.999414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.839 ms 00:18:11.360 [2024-09-30 21:59:55.999424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.001979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.002015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:11.360 [2024-09-30 21:59:56.002027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.512 ms 00:18:11.360 [2024-09-30 21:59:56.002035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.002377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.002399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.360 [2024-09-30 21:59:56.002418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:18:11.360 [2024-09-30 21:59:56.002426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.032610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.032652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:11.360 [2024-09-30 21:59:56.032667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.156 ms 00:18:11.360 [2024-09-30 21:59:56.032677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.036875] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.036933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:11.360 [2024-09-30 21:59:56.036952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:18:11.360 [2024-09-30 21:59:56.036964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.040455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.040497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:11.360 [2024-09-30 21:59:56.040509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.435 ms 00:18:11.360 [2024-09-30 21:59:56.040516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.043782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.043825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.360 [2024-09-30 21:59:56.043845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.228 ms 00:18:11.360 [2024-09-30 21:59:56.043857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.043953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.043975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.360 [2024-09-30 21:59:56.043991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:11.360 [2024-09-30 21:59:56.044004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.044089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.360 [2024-09-30 21:59:56.044116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:11.360 [2024-09-30 21:59:56.044134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:18:11.360 [2024-09-30 21:59:56.044147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.360 [2024-09-30 21:59:56.045069] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2519.840 ms, result 0 00:18:11.360 { 00:18:11.360 "name": "ftl0", 00:18:11.360 "uuid": "134f5004-12dc-48dc-a030-6aaa397fe814" 00:18:11.360 } 00:18:11.360 21:59:56 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:11.360 21:59:56 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:11.619 21:59:56 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:11.619 21:59:56 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:11.879 [2024-09-30 21:59:56.447401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.447447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:11.879 [2024-09-30 21:59:56.447460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:11.879 [2024-09-30 21:59:56.447470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.447494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.879 
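Startup returning result 0 after 2519.840 ms hands back the new bdev ("ftl0", UUID 134f5004-12dc-48dc-a030-6aaa397fe814), and restore.sh immediately snapshots the bdev subsystem so a later run can reattach the same FTL instance, then unloads it; the unload steps traced next persist the L2P and metadata and set the clean state. The envelope around save_subsystem_config mirrors the echo calls traced above; where the JSON ends up is not shown in this log, so the output path below is an assumption:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  {
    echo '{"subsystems": ['
    "$rpc" save_subsystem_config -n bdev   # bdev subsystem config as JSON
    echo ']}'
  } > /tmp/ftl_config.json                 # destination assumed, not in the log
  "$rpc" bdev_ftl_unload -b ftl0           # persists L2P/metadata, marks FTL clean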
[2024-09-30 21:59:56.447985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.448027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:11.879 [2024-09-30 21:59:56.448045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:18:11.879 [2024-09-30 21:59:56.448057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.448419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.448449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:11.879 [2024-09-30 21:59:56.448461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:18:11.879 [2024-09-30 21:59:56.448470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.451781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.451813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:11.879 [2024-09-30 21:59:56.451825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.285 ms 00:18:11.879 [2024-09-30 21:59:56.451832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.458561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.458602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:11.879 [2024-09-30 21:59:56.458614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.705 ms 00:18:11.879 [2024-09-30 21:59:56.458621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.460294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.460332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:11.879 [2024-09-30 21:59:56.460348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.574 ms 00:18:11.879 [2024-09-30 21:59:56.460359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.463901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.463941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:11.879 [2024-09-30 21:59:56.463953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.498 ms 00:18:11.879 [2024-09-30 21:59:56.463961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.464093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.464115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:11.879 [2024-09-30 21:59:56.464132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:11.879 [2024-09-30 21:59:56.464146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.465982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.466017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:11.879 [2024-09-30 21:59:56.466027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:18:11.879 [2024-09-30 21:59:56.466035] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.467462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.467496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:11.879 [2024-09-30 21:59:56.467513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:18:11.879 [2024-09-30 21:59:56.467525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.468767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.879 [2024-09-30 21:59:56.468804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:11.879 [2024-09-30 21:59:56.468814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:18:11.879 [2024-09-30 21:59:56.468821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.879 [2024-09-30 21:59:56.469957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.880 [2024-09-30 21:59:56.469991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:11.880 [2024-09-30 21:59:56.470001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:18:11.880 [2024-09-30 21:59:56.470008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.880 [2024-09-30 21:59:56.470039] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:11.880 [2024-09-30 21:59:56.470053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470231] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 
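The shutdown finishes with ftl_debug dumping per-band validity; nothing was written in this run, so every band reads 0 / 261120 valid blocks, wr_cnt 0, state free. For a dump this long it is easier to let the log summarize itself; a throwaway filter over a saved copy works (the file name is a placeholder, and GNU grep/awk are assumed):

  grep -oE 'Band [0-9]+: [0-9]+ / 261120 wr_cnt: [0-9]+ state: [a-z]+' ftl_restore.log |
    awk '{ n[$NF]++ } END { for (s in n) print s, n[s] }'   # e.g. "free 89"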
[2024-09-30 21:59:56.470545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:18:11.880 [2024-09-30 21:59:56.470847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.470999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:11.880 [2024-09-30 21:59:56.471156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:11.881 [2024-09-30 21:59:56.471336] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:11.881 [2024-09-30 21:59:56.471353] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 134f5004-12dc-48dc-a030-6aaa397fe814 00:18:11.881 [2024-09-30 21:59:56.471365] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:11.881 [2024-09-30 21:59:56.471379] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:11.881 [2024-09-30 21:59:56.471390] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:11.881 [2024-09-30 21:59:56.471404] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:11.881 [2024-09-30 21:59:56.471415] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:11.881 [2024-09-30 21:59:56.471429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:11.881 [2024-09-30 21:59:56.471439] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:11.881 [2024-09-30 21:59:56.471447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:11.881 [2024-09-30 21:59:56.471453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:11.881 [2024-09-30 21:59:56.471462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.881 [2024-09-30 21:59:56.471473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:11.881 [2024-09-30 21:59:56.471487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.423 ms 00:18:11.881 [2024-09-30 21:59:56.471498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.473019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.881 [2024-09-30 21:59:56.473047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:11.881 
[2024-09-30 21:59:56.473058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.488 ms 00:18:11.881 [2024-09-30 21:59:56.473065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.473166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.881 [2024-09-30 21:59:56.473205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:11.881 [2024-09-30 21:59:56.473222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:11.881 [2024-09-30 21:59:56.473234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.478595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.478631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.881 [2024-09-30 21:59:56.478642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.478650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.478707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.478719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.881 [2024-09-30 21:59:56.478733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.478745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.478813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.478836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.881 [2024-09-30 21:59:56.478851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.478864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.478897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.478910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.881 [2024-09-30 21:59:56.478923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.478935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.488026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.488066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.881 [2024-09-30 21:59:56.488078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.488086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.495933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.495976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:11.881 [2024-09-30 21:59:56.495987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.495995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.496063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.496073] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:11.881 [2024-09-30 21:59:56.496082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.496090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.496151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.496165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:11.881 [2024-09-30 21:59:56.496201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.496213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.496322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.496338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:11.881 [2024-09-30 21:59:56.496353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.496364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.496415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.496426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:11.881 [2024-09-30 21:59:56.496443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.496453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.496507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.496519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:11.881 [2024-09-30 21:59:56.496532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.496543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.496601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:11.881 [2024-09-30 21:59:56.496616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:11.881 [2024-09-30 21:59:56.496635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:11.881 [2024-09-30 21:59:56.496647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.881 [2024-09-30 21:59:56.496803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 49.353 ms, result 0 00:18:11.881 true 00:18:11.881 21:59:56 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 86952 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86952 ']' 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86952 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86952 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.881 killing process with pid 86952 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.881 
21:59:56 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86952' 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 86952 00:18:11.881 21:59:56 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 86952 00:18:17.145 22:00:01 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:21.357 262144+0 records in 00:18:21.357 262144+0 records out 00:18:21.357 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.2673 s, 252 MB/s 00:18:21.357 22:00:05 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:23.287 22:00:07 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:23.287 [2024-09-30 22:00:07.751968] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:18:23.287 [2024-09-30 22:00:07.752082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87156 ] 00:18:23.287 [2024-09-30 22:00:07.880514] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:23.287 [2024-09-30 22:00:07.900785] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.287 [2024-09-30 22:00:07.934654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.287 [2024-09-30 22:00:08.022658] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.287 [2024-09-30 22:00:08.022725] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:23.546 [2024-09-30 22:00:08.175687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.546 [2024-09-30 22:00:08.175733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:23.546 [2024-09-30 22:00:08.175746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:23.546 [2024-09-30 22:00:08.175759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.546 [2024-09-30 22:00:08.175804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.546 [2024-09-30 22:00:08.175817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:23.546 [2024-09-30 22:00:08.175828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:23.546 [2024-09-30 22:00:08.175838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.546 [2024-09-30 22:00:08.175862] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:23.546 [2024-09-30 22:00:08.176076] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:23.546 [2024-09-30 22:00:08.176101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.546 [2024-09-30 22:00:08.176111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:23.546 [2024-09-30 22:00:08.176119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:18:23.546 
[2024-09-30 22:00:08.176132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.546 [2024-09-30 22:00:08.177208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:23.546 [2024-09-30 22:00:08.179428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.546 [2024-09-30 22:00:08.179462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:23.546 [2024-09-30 22:00:08.179475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.222 ms 00:18:23.546 [2024-09-30 22:00:08.179484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.546 [2024-09-30 22:00:08.179536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.546 [2024-09-30 22:00:08.179546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:23.546 [2024-09-30 22:00:08.179560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:23.546 [2024-09-30 22:00:08.179567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.546 [2024-09-30 22:00:08.184540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.546 [2024-09-30 22:00:08.184573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:23.546 [2024-09-30 22:00:08.184583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.914 ms 00:18:23.547 [2024-09-30 22:00:08.184594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 22:00:08.184664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.184676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:23.547 [2024-09-30 22:00:08.184683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:23.547 [2024-09-30 22:00:08.184694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 22:00:08.184728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.184737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:23.547 [2024-09-30 22:00:08.184745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:23.547 [2024-09-30 22:00:08.184752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 22:00:08.184773] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.547 [2024-09-30 22:00:08.186135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.186174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:23.547 [2024-09-30 22:00:08.186183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:18:23.547 [2024-09-30 22:00:08.186210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 22:00:08.186238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.186246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:23.547 [2024-09-30 22:00:08.186254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:23.547 [2024-09-30 22:00:08.186260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 
22:00:08.186291] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:23.547 [2024-09-30 22:00:08.186314] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:23.547 [2024-09-30 22:00:08.186350] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:23.547 [2024-09-30 22:00:08.186364] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:23.547 [2024-09-30 22:00:08.186464] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:23.547 [2024-09-30 22:00:08.186481] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:23.547 [2024-09-30 22:00:08.186495] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:23.547 [2024-09-30 22:00:08.186510] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:23.547 [2024-09-30 22:00:08.186519] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:23.547 [2024-09-30 22:00:08.186527] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:23.547 [2024-09-30 22:00:08.186534] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:23.547 [2024-09-30 22:00:08.186541] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:23.547 [2024-09-30 22:00:08.186552] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:23.547 [2024-09-30 22:00:08.186560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.186567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:23.547 [2024-09-30 22:00:08.186574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:23.547 [2024-09-30 22:00:08.186581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 22:00:08.186664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.186680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:23.547 [2024-09-30 22:00:08.186688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:23.547 [2024-09-30 22:00:08.186696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 22:00:08.186790] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:23.547 [2024-09-30 22:00:08.186800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:23.547 [2024-09-30 22:00:08.186809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.547 [2024-09-30 22:00:08.186822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.547 [2024-09-30 22:00:08.186831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:23.547 [2024-09-30 22:00:08.186839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:23.547 [2024-09-30 22:00:08.186847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:23.547 [2024-09-30 22:00:08.186855] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region band_md 00:18:23.547 [2024-09-30 22:00:08.186869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:23.547 [2024-09-30 22:00:08.186877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.547 [2024-09-30 22:00:08.186885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:23.547 [2024-09-30 22:00:08.186892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:23.547 [2024-09-30 22:00:08.186901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:23.547 [2024-09-30 22:00:08.186911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:23.547 [2024-09-30 22:00:08.186919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:23.547 [2024-09-30 22:00:08.186926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.547 [2024-09-30 22:00:08.186934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:23.547 [2024-09-30 22:00:08.186941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:23.547 [2024-09-30 22:00:08.186948] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.547 [2024-09-30 22:00:08.186956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:23.547 [2024-09-30 22:00:08.186964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:23.547 [2024-09-30 22:00:08.186971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.547 [2024-09-30 22:00:08.186979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:23.547 [2024-09-30 22:00:08.186987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:23.547 [2024-09-30 22:00:08.186995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.547 [2024-09-30 22:00:08.187002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:23.547 [2024-09-30 22:00:08.187009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:23.547 [2024-09-30 22:00:08.187016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.547 [2024-09-30 22:00:08.187027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:23.547 [2024-09-30 22:00:08.187035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:23.547 [2024-09-30 22:00:08.187042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:23.547 [2024-09-30 22:00:08.187050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:23.547 [2024-09-30 22:00:08.187057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:23.547 [2024-09-30 22:00:08.187065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.547 [2024-09-30 22:00:08.187072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:23.547 [2024-09-30 22:00:08.187079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:23.547 [2024-09-30 22:00:08.187087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:23.547 [2024-09-30 22:00:08.187094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:23.547 [2024-09-30 22:00:08.187102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:23.547 [2024-09-30 22:00:08.187109] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.547 [2024-09-30 22:00:08.187117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:23.547 [2024-09-30 22:00:08.187124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:23.547 [2024-09-30 22:00:08.187131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.547 [2024-09-30 22:00:08.187139] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:23.547 [2024-09-30 22:00:08.187149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:23.547 [2024-09-30 22:00:08.187160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:23.547 [2024-09-30 22:00:08.187168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:23.547 [2024-09-30 22:00:08.187175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:23.547 [2024-09-30 22:00:08.187182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:23.547 [2024-09-30 22:00:08.187199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:23.547 [2024-09-30 22:00:08.187207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:23.547 [2024-09-30 22:00:08.187213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:23.547 [2024-09-30 22:00:08.187219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:23.547 [2024-09-30 22:00:08.187227] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:23.547 [2024-09-30 22:00:08.187236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.547 [2024-09-30 22:00:08.187244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:23.547 [2024-09-30 22:00:08.187252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:23.547 [2024-09-30 22:00:08.187259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:23.547 [2024-09-30 22:00:08.187265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:23.547 [2024-09-30 22:00:08.187273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:23.547 [2024-09-30 22:00:08.187282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:23.547 [2024-09-30 22:00:08.187289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:23.547 [2024-09-30 22:00:08.187296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:23.547 [2024-09-30 22:00:08.187302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:23.547 [2024-09-30 22:00:08.187309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:23.547 [2024-09-30 22:00:08.187316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:23.547 [2024-09-30 22:00:08.187323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:23.547 [2024-09-30 22:00:08.187330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:23.547 [2024-09-30 22:00:08.187337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:23.547 [2024-09-30 22:00:08.187343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:23.547 [2024-09-30 22:00:08.187351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:23.547 [2024-09-30 22:00:08.187359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:23.547 [2024-09-30 22:00:08.187366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:23.547 [2024-09-30 22:00:08.187372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:23.547 [2024-09-30 22:00:08.187380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:23.547 [2024-09-30 22:00:08.187386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.187395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:23.547 [2024-09-30 22:00:08.187403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:18:23.547 [2024-09-30 22:00:08.187410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.547 [2024-09-30 22:00:08.208404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.547 [2024-09-30 22:00:08.208478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:23.548 [2024-09-30 22:00:08.208512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.948 ms 00:18:23.548 [2024-09-30 22:00:08.208530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.208728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.208759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:23.548 [2024-09-30 22:00:08.208777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:18:23.548 [2024-09-30 22:00:08.208793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.219361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.219395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:23.548 [2024-09-30 22:00:08.219405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.460 ms 00:18:23.548 [2024-09-30 22:00:08.219413] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.219445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.219458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:23.548 [2024-09-30 22:00:08.219466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:23.548 [2024-09-30 22:00:08.219473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.219806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.219835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:23.548 [2024-09-30 22:00:08.219844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:18:23.548 [2024-09-30 22:00:08.219855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.219969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.219983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:23.548 [2024-09-30 22:00:08.219992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:23.548 [2024-09-30 22:00:08.220000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.224567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.224601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:23.548 [2024-09-30 22:00:08.224610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.546 ms 00:18:23.548 [2024-09-30 22:00:08.224617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.226885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:23.548 [2024-09-30 22:00:08.226920] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:23.548 [2024-09-30 22:00:08.226938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.226946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:23.548 [2024-09-30 22:00:08.226954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.243 ms 00:18:23.548 [2024-09-30 22:00:08.226966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.241382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.241419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:23.548 [2024-09-30 22:00:08.241436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.334 ms 00:18:23.548 [2024-09-30 22:00:08.241444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.243346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.243385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:23.548 [2024-09-30 22:00:08.243396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms 00:18:23.548 [2024-09-30 22:00:08.243404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 
22:00:08.244818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.244852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:23.548 [2024-09-30 22:00:08.244862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.376 ms 00:18:23.548 [2024-09-30 22:00:08.244871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.245181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.245215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:23.548 [2024-09-30 22:00:08.245224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:18:23.548 [2024-09-30 22:00:08.245231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.260927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.260969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:23.548 [2024-09-30 22:00:08.260984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.680 ms 00:18:23.548 [2024-09-30 22:00:08.260992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.268327] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:23.548 [2024-09-30 22:00:08.270803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.270834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:23.548 [2024-09-30 22:00:08.270849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.775 ms 00:18:23.548 [2024-09-30 22:00:08.270860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.270910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.270922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:23.548 [2024-09-30 22:00:08.270934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:23.548 [2024-09-30 22:00:08.270942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.271020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.271031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:23.548 [2024-09-30 22:00:08.271039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:23.548 [2024-09-30 22:00:08.271048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.271067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.271074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:23.548 [2024-09-30 22:00:08.271082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:23.548 [2024-09-30 22:00:08.271089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.271119] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:23.548 [2024-09-30 22:00:08.271128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.271136] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:23.548 [2024-09-30 22:00:08.271146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:23.548 [2024-09-30 22:00:08.271153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.274248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.274281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:23.548 [2024-09-30 22:00:08.274290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:18:23.548 [2024-09-30 22:00:08.274297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.274374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:23.548 [2024-09-30 22:00:08.274383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:23.548 [2024-09-30 22:00:08.274391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:23.548 [2024-09-30 22:00:08.274398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:23.548 [2024-09-30 22:00:08.275243] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 99.164 ms, result 0 00:18:45.439  Copying: 44/1024 [MB] (44 MBps) Copying: 90/1024 [MB] (46 MBps) Copying: 142/1024 [MB] (51 MBps) Copying: 188/1024 [MB] (46 MBps) Copying: 234/1024 [MB] (45 MBps) Copying: 279/1024 [MB] (45 MBps) Copying: 324/1024 [MB] (45 MBps) Copying: 362/1024 [MB] (38 MBps) Copying: 407/1024 [MB] (45 MBps) Copying: 454/1024 [MB] (46 MBps) Copying: 500/1024 [MB] (45 MBps) Copying: 552/1024 [MB] (52 MBps) Copying: 605/1024 [MB] (53 MBps) Copying: 655/1024 [MB] (49 MBps) Copying: 700/1024 [MB] (45 MBps) Copying: 746/1024 [MB] (46 MBps) Copying: 792/1024 [MB] (45 MBps) Copying: 837/1024 [MB] (44 MBps) Copying: 881/1024 [MB] (43 MBps) Copying: 928/1024 [MB] (47 MBps) Copying: 981/1024 [MB] (53 MBps) Copying: 1024/1024 [MB] (average 46 MBps)[2024-09-30 22:00:30.224425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.224468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:45.439 [2024-09-30 22:00:30.224480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:45.439 [2024-09-30 22:00:30.224489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.224511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:45.439 [2024-09-30 22:00:30.224946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.224970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:45.439 [2024-09-30 22:00:30.224979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:18:45.439 [2024-09-30 22:00:30.224986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.226478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.226516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:45.439 [2024-09-30 22:00:30.226525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:18:45.439 [2024-09-30 22:00:30.226532] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.238757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.238804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:45.439 [2024-09-30 22:00:30.238814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.209 ms 00:18:45.439 [2024-09-30 22:00:30.238821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.244967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.245002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:45.439 [2024-09-30 22:00:30.245018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:18:45.439 [2024-09-30 22:00:30.245030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.246116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.246148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:45.439 [2024-09-30 22:00:30.246157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:18:45.439 [2024-09-30 22:00:30.246164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.249579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.249614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:45.439 [2024-09-30 22:00:30.249623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:18:45.439 [2024-09-30 22:00:30.249630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.249723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.249732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:45.439 [2024-09-30 22:00:30.249739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:45.439 [2024-09-30 22:00:30.249747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.439 [2024-09-30 22:00:30.251418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.439 [2024-09-30 22:00:30.251448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:45.439 [2024-09-30 22:00:30.251456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:18:45.439 [2024-09-30 22:00:30.251463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.700 [2024-09-30 22:00:30.252688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.700 [2024-09-30 22:00:30.252720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:45.700 [2024-09-30 22:00:30.252728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:18:45.700 [2024-09-30 22:00:30.252735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.700 [2024-09-30 22:00:30.253632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.700 [2024-09-30 22:00:30.253663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:45.700 [2024-09-30 22:00:30.253681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.870 ms 00:18:45.700 [2024-09-30 22:00:30.253687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.700 [2024-09-30 22:00:30.254670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.700 [2024-09-30 22:00:30.254700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:45.700 [2024-09-30 22:00:30.254709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:18:45.700 [2024-09-30 22:00:30.254717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.700 [2024-09-30 22:00:30.254744] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:45.700 [2024-09-30 22:00:30.254763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:18:45.700 [2024-09-30 22:00:30.254933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.254995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:18:45.700 [2024-09-30 22:00:30.255118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 46-100: 0 / 261120 wr_cnt: 0 state: free (one identical entry per band, logged 22:00:30.255118-22:00:30.255518) 00:18:45.701 [2024-09-30 22:00:30.255533] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:45.701 [2024-09-30 22:00:30.255547] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 134f5004-12dc-48dc-a030-6aaa397fe814 00:18:45.701 [2024-09-30 22:00:30.255554] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:45.701 [2024-09-30 22:00:30.255564] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:45.701 [2024-09-30 22:00:30.255571] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:45.701 [2024-09-30 22:00:30.255579] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:45.701 [2024-09-30 22:00:30.255585] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:45.701 [2024-09-30 22:00:30.255593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:45.701 [2024-09-30 22:00:30.255599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:45.701 [2024-09-30 22:00:30.255606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:45.701 [2024-09-30 22:00:30.255612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:45.701 [2024-09-30 22:00:30.255618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.701 [2024-09-30 22:00:30.255628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:45.701 [2024-09-30 22:00:30.255638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:18:45.701 [2024-09-30 22:00:30.255644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.257087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.701 [2024-09-30 22:00:30.257112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:45.701 [2024-09-30 22:00:30.257120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:18:45.701 [2024-09-30 22:00:30.257127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.257218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:45.701 [2024-09-30 22:00:30.257231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:45.701 [2024-09-30 22:00:30.257245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:18:45.701 [2024-09-30 22:00:30.257252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.261714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.261745]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:45.701 [2024-09-30 22:00:30.261753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.261760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.261811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.261821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:45.701 [2024-09-30 22:00:30.261829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.261840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.261884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.261893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:45.701 [2024-09-30 22:00:30.261900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.261907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.261921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.261928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:45.701 [2024-09-30 22:00:30.261938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.261944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.270724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.270761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:45.701 [2024-09-30 22:00:30.270771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.270779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.277888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.277926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:45.701 [2024-09-30 22:00:30.277940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.277948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.277970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.277978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:45.701 [2024-09-30 22:00:30.277985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.277992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.278052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.278061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:45.701 [2024-09-30 22:00:30.278069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.278079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.278135] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:18:45.701 [2024-09-30 22:00:30.278151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:45.701 [2024-09-30 22:00:30.278159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.701 [2024-09-30 22:00:30.278166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.701 [2024-09-30 22:00:30.278206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.702 [2024-09-30 22:00:30.278215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:45.702 [2024-09-30 22:00:30.278222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.702 [2024-09-30 22:00:30.278230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.702 [2024-09-30 22:00:30.278263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.702 [2024-09-30 22:00:30.278271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:45.702 [2024-09-30 22:00:30.278279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.702 [2024-09-30 22:00:30.278286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.702 [2024-09-30 22:00:30.278324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:45.702 [2024-09-30 22:00:30.278337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:45.702 [2024-09-30 22:00:30.278349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:45.702 [2024-09-30 22:00:30.278358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:45.702 [2024-09-30 22:00:30.278461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.013 ms, result 0 00:18:46.269 00:18:46.269 00:18:46.269 22:00:30 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:46.269 [2024-09-30 22:00:30.928388] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:18:46.269 [2024-09-30 22:00:30.928504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87406 ] 00:18:46.269 [2024-09-30 22:00:31.057035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
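The statistics dumped at the end of the FTL shutdown above are internally consistent: the reported WAF: inf matches write amplification computed as total writes over user writes, since 960 total writes against 0 user writes divides to infinity. The spdk_dd restore step starting here copies --count=262144 blocks from the ftl0 bdev into testfile; assuming the FTL device's usual 4 KiB logical block size (an assumption, since the block size is not printed in this log), that works out to exactly 1024 MiB, which is what the "Copying: .../1024 [MB]" progress below reports. A minimal sketch of both checks, with the 4 KiB block size as the stated assumption:

    # Sanity checks against values in the surrounding log records.
    # ASSUMPTION: 4 KiB FTL logical block size (not printed in this log).
    BLOCK_SIZE = 4 * 1024                       # bytes per FTL block (assumed)
    COUNT = 262144                              # --count from the spdk_dd command line
    print(BLOCK_SIZE * COUNT // 2**20, "MiB")   # -> 1024 MiB, matching "Copying: .../1024 [MB]"

    total_writes, user_writes = 960, 0          # from the ftl_dev_dump_stats records above
    waf = float("inf") if user_writes == 0 else total_writes / user_writes
    print("WAF:", waf)                          # -> WAF: inf, as logged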
00:18:46.269 [2024-09-30 22:00:31.077517] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.528 [2024-09-30 22:00:31.111819] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.528 [2024-09-30 22:00:31.200062] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.528 [2024-09-30 22:00:31.200121] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:46.788 [2024-09-30 22:00:31.352414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.352455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:46.788 [2024-09-30 22:00:31.352467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.788 [2024-09-30 22:00:31.352475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.352518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.352528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.788 [2024-09-30 22:00:31.352537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:46.788 [2024-09-30 22:00:31.352544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.352565] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:46.788 [2024-09-30 22:00:31.352868] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:46.788 [2024-09-30 22:00:31.352887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.352897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.788 [2024-09-30 22:00:31.352905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:18:46.788 [2024-09-30 22:00:31.352915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.353977] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:46.788 [2024-09-30 22:00:31.356292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.356320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:46.788 [2024-09-30 22:00:31.356333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.317 ms 00:18:46.788 [2024-09-30 22:00:31.356341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.356390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.356399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:46.788 [2024-09-30 22:00:31.356409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:46.788 [2024-09-30 22:00:31.356416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.361367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.361394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.788 [2024-09-30 22:00:31.361408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.894 ms 00:18:46.788 [2024-09-30 22:00:31.361419] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.361485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.361493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.788 [2024-09-30 22:00:31.361501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:46.788 [2024-09-30 22:00:31.361508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.361543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.361553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:46.788 [2024-09-30 22:00:31.361561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:46.788 [2024-09-30 22:00:31.361574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.361597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:46.788 [2024-09-30 22:00:31.362916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.362940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.788 [2024-09-30 22:00:31.362948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.326 ms 00:18:46.788 [2024-09-30 22:00:31.362960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.362990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.788 [2024-09-30 22:00:31.362998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:46.788 [2024-09-30 22:00:31.363006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:46.788 [2024-09-30 22:00:31.363013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.788 [2024-09-30 22:00:31.363042] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:46.788 [2024-09-30 22:00:31.363060] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:46.788 [2024-09-30 22:00:31.363093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:46.788 [2024-09-30 22:00:31.363111] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:46.789 [2024-09-30 22:00:31.363229] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:46.789 [2024-09-30 22:00:31.363244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:46.789 [2024-09-30 22:00:31.363254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:46.789 [2024-09-30 22:00:31.363266] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363275] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363283] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:46.789 [2024-09-30 22:00:31.363290] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:18:46.789 [2024-09-30 22:00:31.363300] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:46.789 [2024-09-30 22:00:31.363307] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:46.789 [2024-09-30 22:00:31.363314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.789 [2024-09-30 22:00:31.363321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:46.789 [2024-09-30 22:00:31.363329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:46.789 [2024-09-30 22:00:31.363335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.789 [2024-09-30 22:00:31.363422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.789 [2024-09-30 22:00:31.363431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:46.789 [2024-09-30 22:00:31.363438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:46.789 [2024-09-30 22:00:31.363445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.789 [2024-09-30 22:00:31.363541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:46.789 [2024-09-30 22:00:31.363550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:46.789 [2024-09-30 22:00:31.363558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:46.789 [2024-09-30 22:00:31.363579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:46.789 [2024-09-30 22:00:31.363605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.789 [2024-09-30 22:00:31.363618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:46.789 [2024-09-30 22:00:31.363624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:46.789 [2024-09-30 22:00:31.363633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:46.789 [2024-09-30 22:00:31.363641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:46.789 [2024-09-30 22:00:31.363648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:46.789 [2024-09-30 22:00:31.363654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:46.789 [2024-09-30 22:00:31.363667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:46.789 [2024-09-30 22:00:31.363688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363694] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:46.789 [2024-09-30 22:00:31.363707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:46.789 [2024-09-30 22:00:31.363726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:46.789 [2024-09-30 22:00:31.363751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:46.789 [2024-09-30 22:00:31.363771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.789 [2024-09-30 22:00:31.363784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:46.789 [2024-09-30 22:00:31.363790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:46.789 [2024-09-30 22:00:31.363796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:46.789 [2024-09-30 22:00:31.363803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:46.789 [2024-09-30 22:00:31.363810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:46.789 [2024-09-30 22:00:31.363816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:46.789 [2024-09-30 22:00:31.363829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:46.789 [2024-09-30 22:00:31.363835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363841] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:46.789 [2024-09-30 22:00:31.363854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:46.789 [2024-09-30 22:00:31.363864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:46.789 [2024-09-30 22:00:31.363872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:46.789 [2024-09-30 22:00:31.363879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:46.789 [2024-09-30 22:00:31.363886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:46.789 [2024-09-30 22:00:31.363892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:46.789 [2024-09-30 22:00:31.363899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:46.789 [2024-09-30 22:00:31.363905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:46.789 [2024-09-30 22:00:31.363912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:18:46.789 [2024-09-30 22:00:31.363920] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:46.789 [2024-09-30 22:00:31.363933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.789 [2024-09-30 22:00:31.363941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:46.789 [2024-09-30 22:00:31.363949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:46.789 [2024-09-30 22:00:31.363955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:46.789 [2024-09-30 22:00:31.363962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:46.789 [2024-09-30 22:00:31.363969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:46.789 [2024-09-30 22:00:31.363977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:46.789 [2024-09-30 22:00:31.363984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:46.789 [2024-09-30 22:00:31.363992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:46.789 [2024-09-30 22:00:31.363998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:46.789 [2024-09-30 22:00:31.364005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:46.789 [2024-09-30 22:00:31.364012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:46.789 [2024-09-30 22:00:31.364019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:46.790 [2024-09-30 22:00:31.364026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:46.790 [2024-09-30 22:00:31.364033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:46.790 [2024-09-30 22:00:31.364040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:46.790 [2024-09-30 22:00:31.364051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:46.790 [2024-09-30 22:00:31.364061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:46.790 [2024-09-30 22:00:31.364068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:46.790 [2024-09-30 22:00:31.364075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:46.790 [2024-09-30 22:00:31.364083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:46.790 [2024-09-30 22:00:31.364090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.364099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:46.790 [2024-09-30 22:00:31.364107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:18:46.790 [2024-09-30 22:00:31.364114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.380267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.380302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.790 [2024-09-30 22:00:31.380318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.111 ms 00:18:46.790 [2024-09-30 22:00:31.380326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.380412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.380421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:46.790 [2024-09-30 22:00:31.380429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:46.790 [2024-09-30 22:00:31.380437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.389423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.389460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.790 [2024-09-30 22:00:31.389472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.932 ms 00:18:46.790 [2024-09-30 22:00:31.389488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.389521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.389532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.790 [2024-09-30 22:00:31.389547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:46.790 [2024-09-30 22:00:31.389555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.389921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.389948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.790 [2024-09-30 22:00:31.389959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:18:46.790 [2024-09-30 22:00:31.389968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.390109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.390121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.790 [2024-09-30 22:00:31.390132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:18:46.790 [2024-09-30 22:00:31.390142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.394948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 
22:00:31.394977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:46.790 [2024-09-30 22:00:31.394986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.782 ms 00:18:46.790 [2024-09-30 22:00:31.394993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.397307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:46.790 [2024-09-30 22:00:31.397343] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:46.790 [2024-09-30 22:00:31.397356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.397364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:46.790 [2024-09-30 22:00:31.397377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:18:46.790 [2024-09-30 22:00:31.397384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.411892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.411934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:46.790 [2024-09-30 22:00:31.411949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.467 ms 00:18:46.790 [2024-09-30 22:00:31.411957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.413824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.413853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:46.790 [2024-09-30 22:00:31.413862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:18:46.790 [2024-09-30 22:00:31.413869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.415237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.415261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:46.790 [2024-09-30 22:00:31.415270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.336 ms 00:18:46.790 [2024-09-30 22:00:31.415277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.415585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.415601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:46.790 [2024-09-30 22:00:31.415609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:46.790 [2024-09-30 22:00:31.415616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.431361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.431401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:46.790 [2024-09-30 22:00:31.431411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.722 ms 00:18:46.790 [2024-09-30 22:00:31.431423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.438768] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:46.790 [2024-09-30 22:00:31.441098] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.441128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:46.790 [2024-09-30 22:00:31.441138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.639 ms 00:18:46.790 [2024-09-30 22:00:31.441146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.441224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.441237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:46.790 [2024-09-30 22:00:31.441247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:46.790 [2024-09-30 22:00:31.441255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.441327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.441337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:46.790 [2024-09-30 22:00:31.441346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:46.790 [2024-09-30 22:00:31.441357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.441376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.441383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:46.790 [2024-09-30 22:00:31.441395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:46.790 [2024-09-30 22:00:31.441402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.790 [2024-09-30 22:00:31.441431] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:46.790 [2024-09-30 22:00:31.441441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.790 [2024-09-30 22:00:31.441452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:46.790 [2024-09-30 22:00:31.441464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:46.790 [2024-09-30 22:00:31.441474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.791 [2024-09-30 22:00:31.444893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.791 [2024-09-30 22:00:31.444923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:46.791 [2024-09-30 22:00:31.444932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.401 ms 00:18:46.791 [2024-09-30 22:00:31.444940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.791 [2024-09-30 22:00:31.445009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.791 [2024-09-30 22:00:31.445021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:46.791 [2024-09-30 22:00:31.445030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:46.791 [2024-09-30 22:00:31.445037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.791 [2024-09-30 22:00:31.445897] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 93.117 ms, result 0 00:19:08.076  Copying: 47/1024 [MB] (47 MBps) Copying: 96/1024 [MB] (48 MBps) Copying: 147/1024 [MB] (51 MBps) Copying: 195/1024 [MB] (48 
MBps) Copying: 243/1024 [MB] (48 MBps) Copying: 295/1024 [MB] (51 MBps) Copying: 344/1024 [MB] (49 MBps) Copying: 394/1024 [MB] (50 MBps) Copying: 444/1024 [MB] (49 MBps) Copying: 492/1024 [MB] (48 MBps) Copying: 540/1024 [MB] (48 MBps) Copying: 591/1024 [MB] (51 MBps) Copying: 641/1024 [MB] (49 MBps) Copying: 690/1024 [MB] (49 MBps) Copying: 738/1024 [MB] (48 MBps) Copying: 789/1024 [MB] (50 MBps) Copying: 834/1024 [MB] (45 MBps) Copying: 883/1024 [MB] (48 MBps) Copying: 931/1024 [MB] (47 MBps) Copying: 977/1024 [MB] (46 MBps) Copying: 1021/1024 [MB] (43 MBps) Copying: 1024/1024 [MB] (average 48 MBps)[2024-09-30 22:00:52.878128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.076 [2024-09-30 22:00:52.878232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:08.076 [2024-09-30 22:00:52.878259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:08.076 [2024-09-30 22:00:52.878284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.076 [2024-09-30 22:00:52.878327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:08.076 [2024-09-30 22:00:52.878905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.076 [2024-09-30 22:00:52.878928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:08.076 [2024-09-30 22:00:52.878940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:19:08.076 [2024-09-30 22:00:52.878959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.076 [2024-09-30 22:00:52.879242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.076 [2024-09-30 22:00:52.879256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:08.076 [2024-09-30 22:00:52.879273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:19:08.076 [2024-09-30 22:00:52.879283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.076 [2024-09-30 22:00:52.883757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.076 [2024-09-30 22:00:52.883785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:08.076 [2024-09-30 22:00:52.883797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.453 ms 00:19:08.076 [2024-09-30 22:00:52.883807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.890236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.336 [2024-09-30 22:00:52.890265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:08.336 [2024-09-30 22:00:52.890275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.409 ms 00:19:08.336 [2024-09-30 22:00:52.890283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.891621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.336 [2024-09-30 22:00:52.891654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:08.336 [2024-09-30 22:00:52.891663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.283 ms 00:19:08.336 [2024-09-30 22:00:52.891670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.894624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
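The copy above finishes at an average of 48 MBps. A rough cross-check from the wall-clock timestamps in the surrounding records gives roughly the same figure: 'FTL startup' finished at 22:00:31.445897 and the shutdown's first 'Deinit core IO channel' step begins at 22:00:52.878128, bounding the 1024 MB transfer. The window includes a little management overhead on either side, so this estimate runs slightly low:

    # Estimate average copy throughput from the log's own timestamps.
    from datetime import datetime

    start = datetime.fromisoformat("2024-09-30 22:00:31.445897")  # 'FTL startup' finished
    end = datetime.fromisoformat("2024-09-30 22:00:52.878128")    # shutdown begins
    mb_per_s = 1024 / (end - start).total_seconds()
    print(round(mb_per_s, 1), "MB/s")  # ~47.8 MB/s vs. the logged "average 48 MBps"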
00:19:08.336 [2024-09-30 22:00:52.894659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:08.336 [2024-09-30 22:00:52.894668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:19:08.336 [2024-09-30 22:00:52.894676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.894781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.336 [2024-09-30 22:00:52.894791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:08.336 [2024-09-30 22:00:52.894799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:08.336 [2024-09-30 22:00:52.894806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.896662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.336 [2024-09-30 22:00:52.896693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:08.336 [2024-09-30 22:00:52.896701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.837 ms 00:19:08.336 [2024-09-30 22:00:52.896708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.897745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.336 [2024-09-30 22:00:52.897777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:08.336 [2024-09-30 22:00:52.897786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:19:08.336 [2024-09-30 22:00:52.897793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.898790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.336 [2024-09-30 22:00:52.898830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:08.336 [2024-09-30 22:00:52.898840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:19:08.336 [2024-09-30 22:00:52.898848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.900045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.336 [2024-09-30 22:00:52.900078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:08.336 [2024-09-30 22:00:52.900086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:19:08.336 [2024-09-30 22:00:52.900094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.336 [2024-09-30 22:00:52.900109] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:08.336 [2024-09-30 22:00:52.900129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 
261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:08.336 [2024-09-30 22:00:52.900385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
[FTL][ftl0] Bands 32-100: 0 / 261120 wr_cnt: 0 state: free (69 identical per-band NOTICE entries elided)
00:19:08.337 [2024-09-30 22:00:52.900920] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:08.337 [2024-09-30 22:00:52.900928] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 134f5004-12dc-48dc-a030-6aaa397fe814
00:19:08.337 [2024-09-30 22:00:52.900936] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:08.337 [2024-09-30 22:00:52.900943] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:08.337 [2024-09-30 22:00:52.900951] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:08.337 [2024-09-30 22:00:52.900958] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
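Editor's note: the statistics dump above and the one printed after the restore pass later in this log are internally consistent. Below is a minimal sanity-check sketch in Python; the waf helper is hypothetical (not part of the SPDK tree), and both the WAF definition (total writes / user writes) and the 4 KiB FTL block size are inferred from values in this log rather than taken from SPDK sources.

```python
# Editor's sketch: consistency checks against values printed by ftl_debug.c in this log.

# 1) WAF as printed by ftl_dev_dump_stats, assuming WAF = total writes / user writes.
def waf(total_writes: int, user_writes: int) -> str:
    if user_writes == 0:
        return "inf"  # nothing user-written yet (metadata writes only), reported as inf
    return f"{total_writes / user_writes:.4f}"

print(waf(960, 0))          # first dump above: "inf"
print(waf(121536, 120576))  # dump after the restore copy below: "1.0080"

# 2) L2P region sizing from the layout dump further below, assuming a 4 KiB
#    FTL block (blk_sz 0x5000 corresponding to "blocks: 80.00 MiB" implies 4096 B/block).
l2p_blocks = 0x5000                # blk_sz of the l2p region in the SB metadata layout
print(l2p_blocks * 4096 / 2**20)   # 80.0, matching "Region l2p ... blocks: 80.00 MiB"
print(20971520 * 4 / 2**20)        # 80.0, matching "L2P entries: 20971520" x "L2P address size: 4"
```

Both checks reproduce the NOTICE lines exactly, which is why the post-restore dump reports WAF: 1.0080 for 120576 user-written blocks.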
00:19:08.337 [2024-09-30 22:00:52.900964] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:08.337 [2024-09-30 22:00:52.900971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:08.337 [2024-09-30 22:00:52.900982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:08.337 [2024-09-30 22:00:52.900989] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:08.337 [2024-09-30 22:00:52.900996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:08.337 [2024-09-30 22:00:52.901002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.337 [2024-09-30 22:00:52.901012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:08.337 [2024-09-30 22:00:52.901020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.894 ms 00:19:08.337 [2024-09-30 22:00:52.901028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.902405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.337 [2024-09-30 22:00:52.902429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:08.337 [2024-09-30 22:00:52.902438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.363 ms 00:19:08.337 [2024-09-30 22:00:52.902445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.902533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.337 [2024-09-30 22:00:52.902546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:08.337 [2024-09-30 22:00:52.902556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:08.337 [2024-09-30 22:00:52.902564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.906934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.906964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:08.337 [2024-09-30 22:00:52.906974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.906982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.907034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.907043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:08.337 [2024-09-30 22:00:52.907051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.907058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.907091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.907101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:08.337 [2024-09-30 22:00:52.907109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.907121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.907135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.907149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:08.337 [2024-09-30 22:00:52.907157] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.907164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.915783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.915816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:08.337 [2024-09-30 22:00:52.915826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.915833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.922668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.922705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:08.337 [2024-09-30 22:00:52.922714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.922722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.922759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.922768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:08.337 [2024-09-30 22:00:52.922775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.922783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.922807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.922819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:08.337 [2024-09-30 22:00:52.922828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.922835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.922897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.922907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:08.337 [2024-09-30 22:00:52.922916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.922923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.922948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.922957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:08.337 [2024-09-30 22:00:52.922965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.922975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.337 [2024-09-30 22:00:52.923006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.337 [2024-09-30 22:00:52.923014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:08.337 [2024-09-30 22:00:52.923023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.337 [2024-09-30 22:00:52.923030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-09-30 22:00:52.923068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:08.338 [2024-09-30 22:00:52.923078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:19:08.338 [2024-09-30 22:00:52.923088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:08.338 [2024-09-30 22:00:52.923099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.338 [2024-09-30 22:00:52.923277] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 45.071 ms, result 0 00:19:08.338 00:19:08.338 00:19:08.338 22:00:53 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:10.865 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:10.865 22:00:55 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:10.865 [2024-09-30 22:00:55.190728] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:10.865 [2024-09-30 22:00:55.190971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87658 ] 00:19:10.865 [2024-09-30 22:00:55.319055] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:19:10.865 [2024-09-30 22:00:55.340162] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.865 [2024-09-30 22:00:55.373481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.865 [2024-09-30 22:00:55.460770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:10.865 [2024-09-30 22:00:55.460833] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:10.865 [2024-09-30 22:00:55.614042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.614087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:10.865 [2024-09-30 22:00:55.614100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.865 [2024-09-30 22:00:55.614108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.614152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.614163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.865 [2024-09-30 22:00:55.614171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:10.865 [2024-09-30 22:00:55.614184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.614214] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:10.865 [2024-09-30 22:00:55.614522] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:10.865 [2024-09-30 22:00:55.614550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.614560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.865 [2024-09-30 22:00:55.614568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:19:10.865 [2024-09-30 22:00:55.614577] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.615689] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:10.865 [2024-09-30 22:00:55.617723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.617756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:10.865 [2024-09-30 22:00:55.617774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.035 ms 00:19:10.865 [2024-09-30 22:00:55.617782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.617829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.617840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:10.865 [2024-09-30 22:00:55.617850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:10.865 [2024-09-30 22:00:55.617857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.622705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.622733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.865 [2024-09-30 22:00:55.622742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.795 ms 00:19:10.865 [2024-09-30 22:00:55.622754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.622821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.622830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.865 [2024-09-30 22:00:55.622838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:10.865 [2024-09-30 22:00:55.622846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.622884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.622894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:10.865 [2024-09-30 22:00:55.622906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:10.865 [2024-09-30 22:00:55.622913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.622934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.865 [2024-09-30 22:00:55.624230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.624256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.865 [2024-09-30 22:00:55.624265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.299 ms 00:19:10.865 [2024-09-30 22:00:55.624272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.624299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.624308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:10.865 [2024-09-30 22:00:55.624316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:10.865 [2024-09-30 22:00:55.624323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.624354] ftl_layout.c: 
613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:10.865 [2024-09-30 22:00:55.624373] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:10.865 [2024-09-30 22:00:55.624407] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:10.865 [2024-09-30 22:00:55.624423] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:10.865 [2024-09-30 22:00:55.624524] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:10.865 [2024-09-30 22:00:55.624536] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:10.865 [2024-09-30 22:00:55.624546] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:10.865 [2024-09-30 22:00:55.624560] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:10.865 [2024-09-30 22:00:55.624569] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:10.865 [2024-09-30 22:00:55.624578] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:10.865 [2024-09-30 22:00:55.624585] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:10.865 [2024-09-30 22:00:55.624592] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:10.865 [2024-09-30 22:00:55.624599] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:10.865 [2024-09-30 22:00:55.624606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.624613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:10.865 [2024-09-30 22:00:55.624620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:19:10.865 [2024-09-30 22:00:55.624628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.624712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.865 [2024-09-30 22:00:55.624722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:10.865 [2024-09-30 22:00:55.624729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:10.865 [2024-09-30 22:00:55.624740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.865 [2024-09-30 22:00:55.624836] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:10.865 [2024-09-30 22:00:55.624853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:10.865 [2024-09-30 22:00:55.624867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.865 [2024-09-30 22:00:55.624878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.865 [2024-09-30 22:00:55.624888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:10.866 [2024-09-30 22:00:55.624895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:10.866 [2024-09-30 22:00:55.624903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:10.866 [2024-09-30 22:00:55.624911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 
00:19:10.866 [2024-09-30 22:00:55.624925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:10.866 [2024-09-30 22:00:55.624934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.866 [2024-09-30 22:00:55.624941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:10.866 [2024-09-30 22:00:55.624949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:10.866 [2024-09-30 22:00:55.624959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.866 [2024-09-30 22:00:55.624967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:10.866 [2024-09-30 22:00:55.624975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:10.866 [2024-09-30 22:00:55.624983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.866 [2024-09-30 22:00:55.624990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:10.866 [2024-09-30 22:00:55.624998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:10.866 [2024-09-30 22:00:55.625006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:10.866 [2024-09-30 22:00:55.625021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.866 [2024-09-30 22:00:55.625036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:10.866 [2024-09-30 22:00:55.625044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.866 [2024-09-30 22:00:55.625059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:10.866 [2024-09-30 22:00:55.625066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.866 [2024-09-30 22:00:55.625085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:10.866 [2024-09-30 22:00:55.625093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.866 [2024-09-30 22:00:55.625108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:10.866 [2024-09-30 22:00:55.625116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.866 [2024-09-30 22:00:55.625130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:10.866 [2024-09-30 22:00:55.625137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:10.866 [2024-09-30 22:00:55.625145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.866 [2024-09-30 22:00:55.625153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:10.866 [2024-09-30 22:00:55.625160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:10.866 [2024-09-30 22:00:55.625168] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:10.866 [2024-09-30 22:00:55.625183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:10.866 [2024-09-30 22:00:55.625204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625211] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:10.866 [2024-09-30 22:00:55.625223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:10.866 [2024-09-30 22:00:55.625238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.866 [2024-09-30 22:00:55.625245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.866 [2024-09-30 22:00:55.625254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:10.866 [2024-09-30 22:00:55.625261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:10.866 [2024-09-30 22:00:55.625269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:10.866 [2024-09-30 22:00:55.625277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:10.866 [2024-09-30 22:00:55.625284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:10.866 [2024-09-30 22:00:55.625292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:10.866 [2024-09-30 22:00:55.625301] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:10.866 [2024-09-30 22:00:55.625310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.866 [2024-09-30 22:00:55.625319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:10.866 [2024-09-30 22:00:55.625325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:10.866 [2024-09-30 22:00:55.625332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:10.866 [2024-09-30 22:00:55.625339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:10.866 [2024-09-30 22:00:55.625346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:10.866 [2024-09-30 22:00:55.625355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:10.866 [2024-09-30 22:00:55.625362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:10.866 [2024-09-30 22:00:55.625369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:10.866 [2024-09-30 22:00:55.625376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:10.866 [2024-09-30 22:00:55.625383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 
blk_offs:0x71a0 blk_sz:0x20 00:19:10.866 [2024-09-30 22:00:55.625390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:10.866 [2024-09-30 22:00:55.625396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:10.866 [2024-09-30 22:00:55.625404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:10.866 [2024-09-30 22:00:55.625411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:10.866 [2024-09-30 22:00:55.625417] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:10.866 [2024-09-30 22:00:55.625429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.866 [2024-09-30 22:00:55.625436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:10.866 [2024-09-30 22:00:55.625444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:10.866 [2024-09-30 22:00:55.625451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:10.866 [2024-09-30 22:00:55.625458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:10.866 [2024-09-30 22:00:55.625466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.866 [2024-09-30 22:00:55.625475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:10.866 [2024-09-30 22:00:55.625483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:19:10.866 [2024-09-30 22:00:55.625490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.866 [2024-09-30 22:00:55.643177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.866 [2024-09-30 22:00:55.643227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.866 [2024-09-30 22:00:55.643247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.645 ms 00:19:10.866 [2024-09-30 22:00:55.643255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.866 [2024-09-30 22:00:55.643338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.866 [2024-09-30 22:00:55.643347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:10.866 [2024-09-30 22:00:55.643355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:10.866 [2024-09-30 22:00:55.643361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.866 [2024-09-30 22:00:55.652897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.866 [2024-09-30 22:00:55.652942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.866 [2024-09-30 22:00:55.652955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.483 ms 00:19:10.866 [2024-09-30 22:00:55.652966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:10.866 [2024-09-30 22:00:55.653002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.866 [2024-09-30 22:00:55.653015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.866 [2024-09-30 22:00:55.653034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:10.866 [2024-09-30 22:00:55.653044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.866 [2024-09-30 22:00:55.653442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.866 [2024-09-30 22:00:55.653472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.866 [2024-09-30 22:00:55.653485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:19:10.866 [2024-09-30 22:00:55.653495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.866 [2024-09-30 22:00:55.653664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.866 [2024-09-30 22:00:55.653686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.866 [2024-09-30 22:00:55.653701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:10.866 [2024-09-30 22:00:55.653714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.866 [2024-09-30 22:00:55.658755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.867 [2024-09-30 22:00:55.658788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.867 [2024-09-30 22:00:55.658796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.996 ms 00:19:10.867 [2024-09-30 22:00:55.658804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.867 [2024-09-30 22:00:55.660848] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:10.867 [2024-09-30 22:00:55.660881] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:10.867 [2024-09-30 22:00:55.660898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.867 [2024-09-30 22:00:55.660906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:10.867 [2024-09-30 22:00:55.660919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:19:10.867 [2024-09-30 22:00:55.660926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.867 [2024-09-30 22:00:55.675356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.867 [2024-09-30 22:00:55.675392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:10.867 [2024-09-30 22:00:55.675404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.395 ms 00:19:10.867 [2024-09-30 22:00:55.675416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.125 [2024-09-30 22:00:55.677038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.125 [2024-09-30 22:00:55.677071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:11.125 [2024-09-30 22:00:55.677080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:19:11.125 [2024-09-30 22:00:55.677087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.125 [2024-09-30 22:00:55.678501] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.125 [2024-09-30 22:00:55.678530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:11.125 [2024-09-30 22:00:55.678538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:19:11.125 [2024-09-30 22:00:55.678545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.125 [2024-09-30 22:00:55.678850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.125 [2024-09-30 22:00:55.678872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:11.126 [2024-09-30 22:00:55.678881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:19:11.126 [2024-09-30 22:00:55.678889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.126 [2024-09-30 22:00:55.694400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.126 [2024-09-30 22:00:55.694443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:11.126 [2024-09-30 22:00:55.694453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.492 ms 00:19:11.126 [2024-09-30 22:00:55.694461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.126 [2024-09-30 22:00:55.701733] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:11.126 [2024-09-30 22:00:55.704003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.126 [2024-09-30 22:00:55.704036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:11.126 [2024-09-30 22:00:55.704046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.507 ms 00:19:11.126 [2024-09-30 22:00:55.704054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.126 [2024-09-30 22:00:55.704101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.126 [2024-09-30 22:00:55.704114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:11.126 [2024-09-30 22:00:55.704123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:11.126 [2024-09-30 22:00:55.704137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.126 [2024-09-30 22:00:55.704230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.126 [2024-09-30 22:00:55.704242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:11.126 [2024-09-30 22:00:55.704252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:11.126 [2024-09-30 22:00:55.704323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.126 [2024-09-30 22:00:55.704341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.126 [2024-09-30 22:00:55.704350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:11.126 [2024-09-30 22:00:55.704358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:11.126 [2024-09-30 22:00:55.704368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.126 [2024-09-30 22:00:55.704398] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:11.126 [2024-09-30 22:00:55.704408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.126 [2024-09-30 22:00:55.704415] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:11.126 [2024-09-30 22:00:55.704424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms
00:19:11.126 [2024-09-30 22:00:55.704432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.126 [2024-09-30 22:00:55.707388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.126 [2024-09-30 22:00:55.707420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:11.126 [2024-09-30 22:00:55.707430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms
00:19:11.126 [2024-09-30 22:00:55.707437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.126 [2024-09-30 22:00:55.707502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:11.126 [2024-09-30 22:00:55.707516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:11.126 [2024-09-30 22:00:55.707524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:19:11.126 [2024-09-30 22:00:55.707531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:11.126 [2024-09-30 22:00:55.708470] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 94.028 ms, result 0
00:19:34.400  Copying: 1024/1024 [MB] (average 43 MBps) (23 intermediate progress updates elided)
[2024-09-30 22:01:19.161906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.400 [2024-09-30 22:01:19.161969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:34.400 [2024-09-30 22:01:19.161984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:34.400 [2024-09-30 22:01:19.161992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:34.400 [2024-09-30 22:01:19.163097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:34.400 [2024-09-30 22:01:19.164603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.400 [2024-09-30 22:01:19.164724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:34.400 [2024-09-30 22:01:19.164785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.485 ms
00:19:34.400 [2024-09-30 22:01:19.164813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:34.400 [2024-09-30 22:01:19.176934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.400 [2024-09-30 22:01:19.177045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:34.400 [2024-09-30 22:01:19.177132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
9.949 ms 00:19:34.400 [2024-09-30 22:01:19.177154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.400 [2024-09-30 22:01:19.194464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.400 [2024-09-30 22:01:19.194569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.400 [2024-09-30 22:01:19.194626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.265 ms 00:19:34.400 [2024-09-30 22:01:19.194648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.400 [2024-09-30 22:01:19.200853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.400 [2024-09-30 22:01:19.200965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.400 [2024-09-30 22:01:19.201019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.133 ms 00:19:34.400 [2024-09-30 22:01:19.201041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.400 [2024-09-30 22:01:19.202074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.401 [2024-09-30 22:01:19.202178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.401 [2024-09-30 22:01:19.202250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:19:34.401 [2024-09-30 22:01:19.202272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.401 [2024-09-30 22:01:19.205661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.401 [2024-09-30 22:01:19.205766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.401 [2024-09-30 22:01:19.205815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.272 ms 00:19:34.401 [2024-09-30 22:01:19.205837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.659 [2024-09-30 22:01:19.254834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.659 [2024-09-30 22:01:19.254933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.659 [2024-09-30 22:01:19.255019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.944 ms 00:19:34.659 [2024-09-30 22:01:19.255040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.659 [2024-09-30 22:01:19.256588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.659 [2024-09-30 22:01:19.256686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.659 [2024-09-30 22:01:19.256699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:19:34.659 [2024-09-30 22:01:19.256706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.659 [2024-09-30 22:01:19.257794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.659 [2024-09-30 22:01:19.257821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.659 [2024-09-30 22:01:19.257829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:19:34.659 [2024-09-30 22:01:19.257836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.659 [2024-09-30 22:01:19.258721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.659 [2024-09-30 22:01:19.258756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.659 [2024-09-30 22:01:19.258764] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms
00:19:34.659 [2024-09-30 22:01:19.258771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:34.659 [2024-09-30 22:01:19.259613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.659 [2024-09-30 22:01:19.259640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:34.659 [2024-09-30 22:01:19.259648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms
00:19:34.659 [2024-09-30 22:01:19.259654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:34.659 [2024-09-30 22:01:19.259680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:34.659 [2024-09-30 22:01:19.259692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 120576 / 261120 wr_cnt: 1 state: open
[FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (99 identical per-band NOTICE entries elided)
00:19:34.661 [2024-09-30 22:01:19.260460] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:34.661 [2024-09-30 22:01:19.260467] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 134f5004-12dc-48dc-a030-6aaa397fe814
00:19:34.661 [2024-09-30 22:01:19.260475] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 120576
00:19:34.661 [2024-09-30 22:01:19.260492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 121536
00:19:34.661 [2024-09-30 22:01:19.260502] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 120576
00:19:34.661 [2024-09-30 22:01:19.260510] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0080
00:19:34.661 [2024-09-30 22:01:19.260520] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:34.661 [2024-09-30 22:01:19.260527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:34.661 [2024-09-30 22:01:19.260534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:34.661 [2024-09-30 22:01:19.260540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:34.661 [2024-09-30 22:01:19.260547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:34.661 [2024-09-30 22:01:19.260554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.661 [2024-09-30 22:01:19.260561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:34.661 [2024-09-30 22:01:19.260569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms
00:19:34.661 [2024-09-30 22:01:19.260576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:34.661 [2024-09-30 22:01:19.262001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.661 [2024-09-30 22:01:19.262018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:34.661 [2024-09-30 22:01:19.262027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms
00:19:34.661 [2024-09-30 22:01:19.262035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:34.661 [2024-09-30 22:01:19.262108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:34.661 [2024-09-30 22:01:19.262122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:34.661 [2024-09-30 22:01:19.262130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:19:34.661 [2024-09-30 22:01:19.262138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:34.661 [2024-09-30 22:01:19.266608] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.266632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.661 [2024-09-30 22:01:19.266641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.266649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.266700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.266712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.661 [2024-09-30 22:01:19.266719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.266726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.266775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.266784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.661 [2024-09-30 22:01:19.266792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.266798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.266812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.266819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.661 [2024-09-30 22:01:19.266827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.266834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.275485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.275522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.661 [2024-09-30 22:01:19.275531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.275539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.282634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.282676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.661 [2024-09-30 22:01:19.282686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.282694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.282715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.282725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.661 [2024-09-30 22:01:19.282733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.282741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.282776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.282784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.661 [2024-09-30 22:01:19.282791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.282802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:34.661 [2024-09-30 22:01:19.282860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.282873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.661 [2024-09-30 22:01:19.282883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.282891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.282917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.282926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:34.661 [2024-09-30 22:01:19.282933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.282940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.282971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.282979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.661 [2024-09-30 22:01:19.282988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.282996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.283032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.661 [2024-09-30 22:01:19.283040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.661 [2024-09-30 22:01:19.283048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.661 [2024-09-30 22:01:19.283056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.661 [2024-09-30 22:01:19.283173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 124.825 ms, result 0 00:19:36.036 00:19:36.036 00:19:36.036 22:01:20 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:36.036 [2024-09-30 22:01:20.631934] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:19:36.036 [2024-09-30 22:01:20.632051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87923 ] 00:19:36.036 [2024-09-30 22:01:20.760784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:19:36.036 [2024-09-30 22:01:20.779600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.036 [2024-09-30 22:01:20.813452] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.296 [2024-09-30 22:01:20.901898] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.296 [2024-09-30 22:01:20.901966] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:36.296 [2024-09-30 22:01:21.054750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.054797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:36.296 [2024-09-30 22:01:21.054809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:36.296 [2024-09-30 22:01:21.054820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.054864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.054874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.296 [2024-09-30 22:01:21.054886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:36.296 [2024-09-30 22:01:21.054893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.054911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:36.296 [2024-09-30 22:01:21.055377] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:36.296 [2024-09-30 22:01:21.055416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.055428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.296 [2024-09-30 22:01:21.055438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:36.296 [2024-09-30 22:01:21.055447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.056595] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:36.296 [2024-09-30 22:01:21.058740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.058774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:36.296 [2024-09-30 22:01:21.058787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.147 ms 00:19:36.296 [2024-09-30 22:01:21.058794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.058862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.058873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:36.296 [2024-09-30 22:01:21.058881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:36.296 [2024-09-30 22:01:21.058890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.063873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.063905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.296 [2024-09-30 22:01:21.063913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.923 ms 00:19:36.296 [2024-09-30 22:01:21.063924] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.063995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.064004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.296 [2024-09-30 22:01:21.064012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:36.296 [2024-09-30 22:01:21.064019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.064053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.064064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:36.296 [2024-09-30 22:01:21.064072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:36.296 [2024-09-30 22:01:21.064079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.064101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.296 [2024-09-30 22:01:21.065452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.065479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.296 [2024-09-30 22:01:21.065488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.356 ms 00:19:36.296 [2024-09-30 22:01:21.065495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.065521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.065529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:36.296 [2024-09-30 22:01:21.065537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:36.296 [2024-09-30 22:01:21.065544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.065575] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:36.296 [2024-09-30 22:01:21.065596] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:36.296 [2024-09-30 22:01:21.065633] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:36.296 [2024-09-30 22:01:21.065651] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:36.296 [2024-09-30 22:01:21.065752] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:36.296 [2024-09-30 22:01:21.065762] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:36.296 [2024-09-30 22:01:21.065772] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:36.296 [2024-09-30 22:01:21.065784] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:36.296 [2024-09-30 22:01:21.065796] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:36.296 [2024-09-30 22:01:21.065803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:36.296 [2024-09-30 22:01:21.065810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:19:36.296 [2024-09-30 22:01:21.065817] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:36.296 [2024-09-30 22:01:21.065824] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:36.296 [2024-09-30 22:01:21.065831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.065842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:36.296 [2024-09-30 22:01:21.065849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:36.296 [2024-09-30 22:01:21.065856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.065939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.296 [2024-09-30 22:01:21.065949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:36.296 [2024-09-30 22:01:21.065956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:36.296 [2024-09-30 22:01:21.065965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.296 [2024-09-30 22:01:21.066061] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:36.296 [2024-09-30 22:01:21.066070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:36.296 [2024-09-30 22:01:21.066078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:36.297 [2024-09-30 22:01:21.066099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:36.297 [2024-09-30 22:01:21.066128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.297 [2024-09-30 22:01:21.066143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:36.297 [2024-09-30 22:01:21.066151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:36.297 [2024-09-30 22:01:21.066160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:36.297 [2024-09-30 22:01:21.066168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:36.297 [2024-09-30 22:01:21.066176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:36.297 [2024-09-30 22:01:21.066183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:36.297 [2024-09-30 22:01:21.066217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:36.297 [2024-09-30 22:01:21.066240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066247] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:36.297 [2024-09-30 22:01:21.066262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:36.297 [2024-09-30 22:01:21.066284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:36.297 [2024-09-30 22:01:21.066310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:36.297 [2024-09-30 22:01:21.066334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.297 [2024-09-30 22:01:21.066349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:36.297 [2024-09-30 22:01:21.066356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:36.297 [2024-09-30 22:01:21.066364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:36.297 [2024-09-30 22:01:21.066377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:36.297 [2024-09-30 22:01:21.066385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:36.297 [2024-09-30 22:01:21.066392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:36.297 [2024-09-30 22:01:21.066407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:36.297 [2024-09-30 22:01:21.066414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066421] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:36.297 [2024-09-30 22:01:21.066432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:36.297 [2024-09-30 22:01:21.066443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:36.297 [2024-09-30 22:01:21.066455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:36.297 [2024-09-30 22:01:21.066463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:36.297 [2024-09-30 22:01:21.066471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:36.297 [2024-09-30 22:01:21.066478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:36.297 [2024-09-30 22:01:21.066486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:36.297 [2024-09-30 22:01:21.066494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:36.297 [2024-09-30 22:01:21.066501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:19:36.297 [2024-09-30 22:01:21.066510] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:36.297 [2024-09-30 22:01:21.066520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.297 [2024-09-30 22:01:21.066529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:36.297 [2024-09-30 22:01:21.066537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:36.297 [2024-09-30 22:01:21.066545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:36.297 [2024-09-30 22:01:21.066552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:36.297 [2024-09-30 22:01:21.066558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:36.297 [2024-09-30 22:01:21.066567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:36.297 [2024-09-30 22:01:21.066574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:36.297 [2024-09-30 22:01:21.066580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:36.297 [2024-09-30 22:01:21.066587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:36.297 [2024-09-30 22:01:21.066594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:36.297 [2024-09-30 22:01:21.066600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:36.297 [2024-09-30 22:01:21.066607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:36.297 [2024-09-30 22:01:21.066614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:36.297 [2024-09-30 22:01:21.066621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:36.297 [2024-09-30 22:01:21.066627] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:36.297 [2024-09-30 22:01:21.066638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:36.297 [2024-09-30 22:01:21.066646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:36.297 [2024-09-30 22:01:21.066652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:36.297 [2024-09-30 22:01:21.066659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:36.297 [2024-09-30 22:01:21.066666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:36.297 [2024-09-30 22:01:21.066673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.297 [2024-09-30 22:01:21.066682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:36.297 [2024-09-30 22:01:21.066690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.680 ms 00:19:36.297 [2024-09-30 22:01:21.066697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.297 [2024-09-30 22:01:21.084274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.297 [2024-09-30 22:01:21.084423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.297 [2024-09-30 22:01:21.084445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.535 ms 00:19:36.297 [2024-09-30 22:01:21.084453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.297 [2024-09-30 22:01:21.084537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.297 [2024-09-30 22:01:21.084545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.297 [2024-09-30 22:01:21.084553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:36.297 [2024-09-30 22:01:21.084560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.297 [2024-09-30 22:01:21.093029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.297 [2024-09-30 22:01:21.093066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.297 [2024-09-30 22:01:21.093077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.418 ms 00:19:36.297 [2024-09-30 22:01:21.093085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.297 [2024-09-30 22:01:21.093124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.297 [2024-09-30 22:01:21.093134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.297 [2024-09-30 22:01:21.093143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:36.297 [2024-09-30 22:01:21.093151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.297 [2024-09-30 22:01:21.093515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.297 [2024-09-30 22:01:21.093546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.297 [2024-09-30 22:01:21.093556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:19:36.297 [2024-09-30 22:01:21.093565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.297 [2024-09-30 22:01:21.093701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.297 [2024-09-30 22:01:21.093722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.297 [2024-09-30 22:01:21.093732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:36.297 [2024-09-30 22:01:21.093742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.298 [2024-09-30 22:01:21.098545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.298 [2024-09-30 
22:01:21.098585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.298 [2024-09-30 22:01:21.098596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.780 ms 00:19:36.298 [2024-09-30 22:01:21.098609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.298 [2024-09-30 22:01:21.101058] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:36.298 [2024-09-30 22:01:21.101092] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:36.298 [2024-09-30 22:01:21.101105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.298 [2024-09-30 22:01:21.101113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:36.298 [2024-09-30 22:01:21.101121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.398 ms 00:19:36.298 [2024-09-30 22:01:21.101134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.115683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.115810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:36.557 [2024-09-30 22:01:21.115825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.514 ms 00:19:36.557 [2024-09-30 22:01:21.115836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.117473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.117503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:36.557 [2024-09-30 22:01:21.117512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.606 ms 00:19:36.557 [2024-09-30 22:01:21.117518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.118820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.118851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:36.557 [2024-09-30 22:01:21.118860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:19:36.557 [2024-09-30 22:01:21.118867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.119179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.119221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.557 [2024-09-30 22:01:21.119230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:19:36.557 [2024-09-30 22:01:21.119237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.135160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.135329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:36.557 [2024-09-30 22:01:21.135345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.905 ms 00:19:36.557 [2024-09-30 22:01:21.135353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.142637] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:36.557 [2024-09-30 22:01:21.144979] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.145012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:36.557 [2024-09-30 22:01:21.145023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.594 ms 00:19:36.557 [2024-09-30 22:01:21.145031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.145082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.145093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:36.557 [2024-09-30 22:01:21.145105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:36.557 [2024-09-30 22:01:21.145118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.557 [2024-09-30 22:01:21.146633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.557 [2024-09-30 22:01:21.146668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:36.557 [2024-09-30 22:01:21.146678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:19:36.558 [2024-09-30 22:01:21.146687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.558 [2024-09-30 22:01:21.146709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.558 [2024-09-30 22:01:21.146716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:36.558 [2024-09-30 22:01:21.146724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.558 [2024-09-30 22:01:21.146735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.558 [2024-09-30 22:01:21.146789] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:36.558 [2024-09-30 22:01:21.146799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.558 [2024-09-30 22:01:21.146807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:36.558 [2024-09-30 22:01:21.146814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:36.558 [2024-09-30 22:01:21.146824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.558 [2024-09-30 22:01:21.150040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.558 [2024-09-30 22:01:21.150163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:36.558 [2024-09-30 22:01:21.150178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.200 ms 00:19:36.558 [2024-09-30 22:01:21.150202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.558 [2024-09-30 22:01:21.150271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.558 [2024-09-30 22:01:21.150281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:36.558 [2024-09-30 22:01:21.150289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:36.558 [2024-09-30 22:01:21.150296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.558 [2024-09-30 22:01:21.151150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 96.016 ms, result 0 00:19:57.518  Copying: 44/1024 [MB] (44 MBps) Copying: 95/1024 [MB] (50 MBps) Copying: 142/1024 [MB] (47 MBps) Copying: 186/1024 [MB] (44 
MBps) Copying: 237/1024 [MB] (50 MBps) Copying: 287/1024 [MB] (50 MBps) Copying: 337/1024 [MB] (49 MBps) Copying: 386/1024 [MB] (48 MBps) Copying: 436/1024 [MB] (49 MBps) Copying: 487/1024 [MB] (51 MBps) Copying: 537/1024 [MB] (49 MBps) Copying: 588/1024 [MB] (50 MBps) Copying: 639/1024 [MB] (51 MBps) Copying: 689/1024 [MB] (50 MBps) Copying: 738/1024 [MB] (49 MBps) Copying: 784/1024 [MB] (46 MBps) Copying: 833/1024 [MB] (48 MBps) Copying: 881/1024 [MB] (47 MBps) Copying: 931/1024 [MB] (50 MBps) Copying: 979/1024 [MB] (48 MBps) Copying: 1024/1024 [MB] (average 48 MBps)[2024-09-30 22:01:42.317604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.518 [2024-09-30 22:01:42.317666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.518 [2024-09-30 22:01:42.317680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.518 [2024-09-30 22:01:42.317688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.518 [2024-09-30 22:01:42.317713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.518 [2024-09-30 22:01:42.318159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.518 [2024-09-30 22:01:42.318176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.518 [2024-09-30 22:01:42.318185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:19:57.518 [2024-09-30 22:01:42.318219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.518 [2024-09-30 22:01:42.318433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.518 [2024-09-30 22:01:42.318443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.518 [2024-09-30 22:01:42.318452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:19:57.518 [2024-09-30 22:01:42.318464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.518 [2024-09-30 22:01:42.322609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.518 [2024-09-30 22:01:42.322643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.518 [2024-09-30 22:01:42.322653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.131 ms 00:19:57.518 [2024-09-30 22:01:42.322661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.518 [2024-09-30 22:01:42.328811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.518 [2024-09-30 22:01:42.328844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.518 [2024-09-30 22:01:42.328855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.116 ms 00:19:57.518 [2024-09-30 22:01:42.328862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.518 [2024-09-30 22:01:42.330365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.518 [2024-09-30 22:01:42.330398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.518 [2024-09-30 22:01:42.330407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:19:57.778 [2024-09-30 22:01:42.330414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.778 [2024-09-30 22:01:42.333719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.778 [2024-09-30 22:01:42.333762] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.778 [2024-09-30 22:01:42.333771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.277 ms 00:19:57.778 [2024-09-30 22:01:42.333778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.778 [2024-09-30 22:01:42.392967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.778 [2024-09-30 22:01:42.393014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.778 [2024-09-30 22:01:42.393024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.149 ms 00:19:57.778 [2024-09-30 22:01:42.393033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.778 [2024-09-30 22:01:42.394961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.778 [2024-09-30 22:01:42.395005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.778 [2024-09-30 22:01:42.395017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.904 ms 00:19:57.778 [2024-09-30 22:01:42.395025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.778 [2024-09-30 22:01:42.396107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.778 [2024-09-30 22:01:42.396140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.778 [2024-09-30 22:01:42.396149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.050 ms 00:19:57.778 [2024-09-30 22:01:42.396166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.778 [2024-09-30 22:01:42.396997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.778 [2024-09-30 22:01:42.397030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.778 [2024-09-30 22:01:42.397051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.788 ms 00:19:57.778 [2024-09-30 22:01:42.397058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.778 [2024-09-30 22:01:42.397862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.778 [2024-09-30 22:01:42.397892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.778 [2024-09-30 22:01:42.397901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:19:57.778 [2024-09-30 22:01:42.397908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.778 [2024-09-30 22:01:42.397934] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.778 [2024-09-30 22:01:42.397948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:19:57.778 [2024-09-30 22:01:42.397958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.397966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.397974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.397981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.397988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 
00:19:57.778 [2024-09-30 22:01:42.397995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 
wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.778 [2024-09-30 22:01:42.398185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398552] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.779 [2024-09-30 22:01:42.398707] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.779 [2024-09-30 22:01:42.398725] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 134f5004-12dc-48dc-a030-6aaa397fe814 00:19:57.779 [2024-09-30 22:01:42.398733] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:19:57.779 [2024-09-30 22:01:42.398743] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 11456 00:19:57.779 [2024-09-30 22:01:42.398752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 10496 00:19:57.779 [2024-09-30 22:01:42.398760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0915 00:19:57.779 [2024-09-30 
22:01:42.398767] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.779 [2024-09-30 22:01:42.398774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.779 [2024-09-30 22:01:42.398781] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.779 [2024-09-30 22:01:42.398787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.779 [2024-09-30 22:01:42.398793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.779 [2024-09-30 22:01:42.398800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.779 [2024-09-30 22:01:42.398807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.779 [2024-09-30 22:01:42.398815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:19:57.779 [2024-09-30 22:01:42.398822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.779 [2024-09-30 22:01:42.400296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.779 [2024-09-30 22:01:42.400326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.779 [2024-09-30 22:01:42.400334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:19:57.779 [2024-09-30 22:01:42.400342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.779 [2024-09-30 22:01:42.400426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.779 [2024-09-30 22:01:42.400434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.779 [2024-09-30 22:01:42.400446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:57.779 [2024-09-30 22:01:42.400453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.779 [2024-09-30 22:01:42.404974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.779 [2024-09-30 22:01:42.405007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.779 [2024-09-30 22:01:42.405016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.405023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.405073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.405081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.780 [2024-09-30 22:01:42.405089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.405096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.405132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.405141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.780 [2024-09-30 22:01:42.405149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.405157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.405171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.405178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.780 [2024-09-30 22:01:42.405197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.405205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.413977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.414016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.780 [2024-09-30 22:01:42.414026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.414035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.421216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.780 [2024-09-30 22:01:42.421226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.421234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.421287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.780 [2024-09-30 22:01:42.421295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.421302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.421333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.780 [2024-09-30 22:01:42.421346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.421353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.421427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.780 [2024-09-30 22:01:42.421438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.421445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.421480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.780 [2024-09-30 22:01:42.421487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.421494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.421539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.780 [2024-09-30 22:01:42.421549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.421559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.780 [2024-09-30 22:01:42.421611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.780 
[2024-09-30 22:01:42.421619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.780 [2024-09-30 22:01:42.421625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.780 [2024-09-30 22:01:42.421737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 104.105 ms, result 0 00:19:58.038 00:19:58.038 00:19:58.038 22:01:42 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:00.568 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 86952 00:20:00.568 22:01:44 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 86952 ']' 00:20:00.568 22:01:44 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 86952 00:20:00.568 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (86952) - No such process 00:20:00.568 Process with pid 86952 is not found 00:20:00.568 22:01:44 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 86952 is not found' 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:20:00.568 Remove shared memory files 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:00.568 22:01:44 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:20:00.568 00:20:00.568 real 1m55.256s 00:20:00.568 user 1m45.361s 00:20:00.568 sys 0m11.605s 00:20:00.568 22:01:44 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:00.568 22:01:44 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:00.568 ************************************ 00:20:00.568 END TEST ftl_restore 00:20:00.568 ************************************ 00:20:00.568 22:01:44 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:00.568 22:01:44 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:00.568 22:01:44 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:00.568 22:01:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:00.568 ************************************ 00:20:00.568 START TEST ftl_dirty_shutdown 00:20:00.568 ************************************ 00:20:00.568 22:01:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:00.568 * Looking for test storage... 
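The restore pass above succeeds on two counts: the md5 recorded before shutdown verifies against the restored device, and the write-amplification figure in the statistics dump is consistent with its raw counters. A minimal bash check, using this run's own numbers and paths:

# WAF = total writes / user writes, per the ftl_dev_dump_stats output above
awk 'BEGIN { printf "WAF: %.4f\n", 11456 / 10496 }'    # -> WAF: 1.0915

# the integrity check the test just performed: verify the restored data
# against the checksum file written before the FTL device was shut down
md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5    # -> testfile: OK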
00:20:00.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.568 --rc genhtml_branch_coverage=1 00:20:00.568 --rc genhtml_function_coverage=1 00:20:00.568 --rc genhtml_legend=1 00:20:00.568 --rc geninfo_all_blocks=1 00:20:00.568 --rc geninfo_unexecuted_blocks=1 00:20:00.568 00:20:00.568 ' 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.568 --rc genhtml_branch_coverage=1 00:20:00.568 --rc genhtml_function_coverage=1 00:20:00.568 --rc genhtml_legend=1 00:20:00.568 --rc geninfo_all_blocks=1 00:20:00.568 --rc geninfo_unexecuted_blocks=1 00:20:00.568 00:20:00.568 ' 00:20:00.568 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:00.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.568 --rc genhtml_branch_coverage=1 00:20:00.568 --rc genhtml_function_coverage=1 00:20:00.568 --rc genhtml_legend=1 00:20:00.568 --rc geninfo_all_blocks=1 00:20:00.568 --rc geninfo_unexecuted_blocks=1 00:20:00.569 00:20:00.569 ' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:00.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:00.569 --rc genhtml_branch_coverage=1 00:20:00.569 --rc genhtml_function_coverage=1 00:20:00.569 --rc genhtml_legend=1 00:20:00.569 --rc geninfo_all_blocks=1 00:20:00.569 --rc geninfo_unexecuted_blocks=1 00:20:00.569 00:20:00.569 ' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:00.569 22:01:45 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=88254 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 88254 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 88254 ']' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.569 22:01:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:00.569 [2024-09-30 22:01:45.194681] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:20:00.569 [2024-09-30 22:01:45.194808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88254 ] 00:20:00.569 [2024-09-30 22:01:45.323552] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
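The harness's waitforlisten blocks until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A minimal stand-in sketch, not the harness's actual implementation (paths and core mask are this job's; rpc_get_methods is just a cheap RPC to probe liveness):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
svcpid=$!
# poll the RPC socket until the target responds, bailing out if it died
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$svcpid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
    sleep 0.5
done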
00:20:00.569 [2024-09-30 22:01:45.344465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.569 [2024-09-30 22:01:45.378595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:01.502 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:01.759 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:01.759 { 00:20:01.759 "name": "nvme0n1", 00:20:01.759 "aliases": [ 00:20:01.759 "869ffd08-e42d-4abd-a976-fa546557aee7" 00:20:01.759 ], 00:20:01.759 "product_name": "NVMe disk", 00:20:01.759 "block_size": 4096, 00:20:01.759 "num_blocks": 1310720, 00:20:01.759 "uuid": "869ffd08-e42d-4abd-a976-fa546557aee7", 00:20:01.759 "numa_id": -1, 00:20:01.759 "assigned_rate_limits": { 00:20:01.759 "rw_ios_per_sec": 0, 00:20:01.759 "rw_mbytes_per_sec": 0, 00:20:01.759 "r_mbytes_per_sec": 0, 00:20:01.759 "w_mbytes_per_sec": 0 00:20:01.759 }, 00:20:01.759 "claimed": true, 00:20:01.759 "claim_type": "read_many_write_one", 00:20:01.759 "zoned": false, 00:20:01.759 "supported_io_types": { 00:20:01.759 "read": true, 00:20:01.759 "write": true, 00:20:01.759 "unmap": true, 00:20:01.759 "flush": true, 00:20:01.759 "reset": true, 00:20:01.759 "nvme_admin": true, 00:20:01.759 "nvme_io": true, 00:20:01.759 "nvme_io_md": false, 00:20:01.759 "write_zeroes": true, 00:20:01.759 "zcopy": false, 00:20:01.759 "get_zone_info": false, 00:20:01.759 "zone_management": false, 00:20:01.759 "zone_append": false, 00:20:01.759 "compare": true, 00:20:01.759 "compare_and_write": false, 00:20:01.759 "abort": true, 00:20:01.759 "seek_hole": false, 00:20:01.759 "seek_data": false, 00:20:01.759 "copy": true, 00:20:01.759 "nvme_iov_md": false 00:20:01.759 }, 00:20:01.759 "driver_specific": { 00:20:01.759 "nvme": [ 00:20:01.759 { 00:20:01.759 "pci_address": "0000:00:11.0", 00:20:01.759 "trid": { 00:20:01.759 "trtype": "PCIe", 00:20:01.759 "traddr": "0000:00:11.0" 00:20:01.759 }, 00:20:01.759 "ctrlr_data": { 
00:20:01.759 "cntlid": 0, 00:20:01.759 "vendor_id": "0x1b36", 00:20:01.759 "model_number": "QEMU NVMe Ctrl", 00:20:01.759 "serial_number": "12341", 00:20:01.759 "firmware_revision": "8.0.0", 00:20:01.759 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:01.759 "oacs": { 00:20:01.759 "security": 0, 00:20:01.759 "format": 1, 00:20:01.759 "firmware": 0, 00:20:01.759 "ns_manage": 1 00:20:01.759 }, 00:20:01.759 "multi_ctrlr": false, 00:20:01.759 "ana_reporting": false 00:20:01.759 }, 00:20:01.759 "vs": { 00:20:01.759 "nvme_version": "1.4" 00:20:01.759 }, 00:20:01.759 "ns_data": { 00:20:01.759 "id": 1, 00:20:01.759 "can_share": false 00:20:01.759 } 00:20:01.759 } 00:20:01.759 ], 00:20:01.759 "mp_policy": "active_passive" 00:20:01.759 } 00:20:01.759 } 00:20:01.759 ]' 00:20:01.759 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:01.759 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:01.759 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:01.759 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:01.759 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:01.760 22:01:46 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:20:01.760 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:01.760 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:01.760 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:01.760 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:01.760 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:02.017 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a787b082-6884-49f0-a5f0-29cc6b84d769 00:20:02.017 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:02.017 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a787b082-6884-49f0-a5f0-29cc6b84d769 00:20:02.274 22:01:46 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:02.532 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=9bc1ca00-9ed4-4686-ae84-98a7e6d013d5 00:20:02.532 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9bc1ca00-9ed4-4686-ae84-98a7e6d013d5 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 
4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:02.790 { 00:20:02.790 "name": "4e6b2eb2-7405-430f-9941-cb7ed13fc7d2", 00:20:02.790 "aliases": [ 00:20:02.790 "lvs/nvme0n1p0" 00:20:02.790 ], 00:20:02.790 "product_name": "Logical Volume", 00:20:02.790 "block_size": 4096, 00:20:02.790 "num_blocks": 26476544, 00:20:02.790 "uuid": "4e6b2eb2-7405-430f-9941-cb7ed13fc7d2", 00:20:02.790 "assigned_rate_limits": { 00:20:02.790 "rw_ios_per_sec": 0, 00:20:02.790 "rw_mbytes_per_sec": 0, 00:20:02.790 "r_mbytes_per_sec": 0, 00:20:02.790 "w_mbytes_per_sec": 0 00:20:02.790 }, 00:20:02.790 "claimed": false, 00:20:02.790 "zoned": false, 00:20:02.790 "supported_io_types": { 00:20:02.790 "read": true, 00:20:02.790 "write": true, 00:20:02.790 "unmap": true, 00:20:02.790 "flush": false, 00:20:02.790 "reset": true, 00:20:02.790 "nvme_admin": false, 00:20:02.790 "nvme_io": false, 00:20:02.790 "nvme_io_md": false, 00:20:02.790 "write_zeroes": true, 00:20:02.790 "zcopy": false, 00:20:02.790 "get_zone_info": false, 00:20:02.790 "zone_management": false, 00:20:02.790 "zone_append": false, 00:20:02.790 "compare": false, 00:20:02.790 "compare_and_write": false, 00:20:02.790 "abort": false, 00:20:02.790 "seek_hole": true, 00:20:02.790 "seek_data": true, 00:20:02.790 "copy": false, 00:20:02.790 "nvme_iov_md": false 00:20:02.790 }, 00:20:02.790 "driver_specific": { 00:20:02.790 "lvol": { 00:20:02.790 "lvol_store_uuid": "9bc1ca00-9ed4-4686-ae84-98a7e6d013d5", 00:20:02.790 "base_bdev": "nvme0n1", 00:20:02.790 "thin_provision": true, 00:20:02.790 "num_allocated_clusters": 0, 00:20:02.790 "snapshot": false, 00:20:02.790 "clone": false, 00:20:02.790 "esnap_clone": false 00:20:02.790 } 00:20:02.790 } 00:20:02.790 } 00:20:02.790 ]' 00:20:02.790 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:03.048 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:03.306 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:03.306 22:01:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:03.306 22:01:47 
ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:03.306 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:03.307 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:03.307 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:03.307 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:03.307 22:01:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:03.307 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:03.307 { 00:20:03.307 "name": "4e6b2eb2-7405-430f-9941-cb7ed13fc7d2", 00:20:03.307 "aliases": [ 00:20:03.307 "lvs/nvme0n1p0" 00:20:03.307 ], 00:20:03.307 "product_name": "Logical Volume", 00:20:03.307 "block_size": 4096, 00:20:03.307 "num_blocks": 26476544, 00:20:03.307 "uuid": "4e6b2eb2-7405-430f-9941-cb7ed13fc7d2", 00:20:03.307 "assigned_rate_limits": { 00:20:03.307 "rw_ios_per_sec": 0, 00:20:03.307 "rw_mbytes_per_sec": 0, 00:20:03.307 "r_mbytes_per_sec": 0, 00:20:03.307 "w_mbytes_per_sec": 0 00:20:03.307 }, 00:20:03.307 "claimed": false, 00:20:03.307 "zoned": false, 00:20:03.307 "supported_io_types": { 00:20:03.307 "read": true, 00:20:03.307 "write": true, 00:20:03.307 "unmap": true, 00:20:03.307 "flush": false, 00:20:03.307 "reset": true, 00:20:03.307 "nvme_admin": false, 00:20:03.307 "nvme_io": false, 00:20:03.307 "nvme_io_md": false, 00:20:03.307 "write_zeroes": true, 00:20:03.307 "zcopy": false, 00:20:03.307 "get_zone_info": false, 00:20:03.307 "zone_management": false, 00:20:03.307 "zone_append": false, 00:20:03.307 "compare": false, 00:20:03.307 "compare_and_write": false, 00:20:03.307 "abort": false, 00:20:03.307 "seek_hole": true, 00:20:03.307 "seek_data": true, 00:20:03.307 "copy": false, 00:20:03.307 "nvme_iov_md": false 00:20:03.307 }, 00:20:03.307 "driver_specific": { 00:20:03.307 "lvol": { 00:20:03.307 "lvol_store_uuid": "9bc1ca00-9ed4-4686-ae84-98a7e6d013d5", 00:20:03.307 "base_bdev": "nvme0n1", 00:20:03.307 "thin_provision": true, 00:20:03.307 "num_allocated_clusters": 0, 00:20:03.307 "snapshot": false, 00:20:03.307 "clone": false, 00:20:03.307 "esnap_clone": false 00:20:03.307 } 00:20:03.307 } 00:20:03.307 } 00:20:03.307 ]' 00:20:03.307 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:03.565 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:03.823 
22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:03.823 { 00:20:03.823 "name": "4e6b2eb2-7405-430f-9941-cb7ed13fc7d2", 00:20:03.823 "aliases": [ 00:20:03.823 "lvs/nvme0n1p0" 00:20:03.823 ], 00:20:03.823 "product_name": "Logical Volume", 00:20:03.823 "block_size": 4096, 00:20:03.823 "num_blocks": 26476544, 00:20:03.823 "uuid": "4e6b2eb2-7405-430f-9941-cb7ed13fc7d2", 00:20:03.823 "assigned_rate_limits": { 00:20:03.823 "rw_ios_per_sec": 0, 00:20:03.823 "rw_mbytes_per_sec": 0, 00:20:03.823 "r_mbytes_per_sec": 0, 00:20:03.823 "w_mbytes_per_sec": 0 00:20:03.823 }, 00:20:03.823 "claimed": false, 00:20:03.823 "zoned": false, 00:20:03.823 "supported_io_types": { 00:20:03.823 "read": true, 00:20:03.823 "write": true, 00:20:03.823 "unmap": true, 00:20:03.823 "flush": false, 00:20:03.823 "reset": true, 00:20:03.823 "nvme_admin": false, 00:20:03.823 "nvme_io": false, 00:20:03.823 "nvme_io_md": false, 00:20:03.823 "write_zeroes": true, 00:20:03.823 "zcopy": false, 00:20:03.823 "get_zone_info": false, 00:20:03.823 "zone_management": false, 00:20:03.823 "zone_append": false, 00:20:03.823 "compare": false, 00:20:03.823 "compare_and_write": false, 00:20:03.823 "abort": false, 00:20:03.823 "seek_hole": true, 00:20:03.823 "seek_data": true, 00:20:03.823 "copy": false, 00:20:03.823 "nvme_iov_md": false 00:20:03.823 }, 00:20:03.823 "driver_specific": { 00:20:03.823 "lvol": { 00:20:03.823 "lvol_store_uuid": "9bc1ca00-9ed4-4686-ae84-98a7e6d013d5", 00:20:03.823 "base_bdev": "nvme0n1", 00:20:03.823 "thin_provision": true, 00:20:03.823 "num_allocated_clusters": 0, 00:20:03.823 "snapshot": false, 00:20:03.823 "clone": false, 00:20:03.823 "esnap_clone": false 00:20:03.823 } 00:20:03.823 } 00:20:03.823 } 00:20:03.823 ]' 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:03.823 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 --l2p_dram_limit 10' 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 
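At this point the test has assembled the full bdev stack that bdev_ftl_create consumes. A condensed recap of the RPCs traced above (UUIDs differ per run; sizes follow from the bdev_get_bdevs dumps: 1310720 blocks x 4096 B = 5120 MiB for the raw namespace, 26476544 blocks x 4096 B = 103424 MiB for the thin-provisioned lvol):

rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe namespace, 5120 MiB
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                            # lvstore on the base namespace
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>              # thin 103424 MiB lvol for FTL data
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV-cache NVMe controller
rpc.py bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB cache slice, nvc0n1p0
rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0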
00:20:04.082 22:01:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 4e6b2eb2-7405-430f-9941-cb7ed13fc7d2 --l2p_dram_limit 10 -c nvc0n1p0 00:20:04.082 [2024-09-30 22:01:48.836743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.082 [2024-09-30 22:01:48.836793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:04.082 [2024-09-30 22:01:48.836808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:04.082 [2024-09-30 22:01:48.836816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.082 [2024-09-30 22:01:48.836866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.082 [2024-09-30 22:01:48.836876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.082 [2024-09-30 22:01:48.836887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:04.083 [2024-09-30 22:01:48.836896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.836919] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:04.083 [2024-09-30 22:01:48.837154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:04.083 [2024-09-30 22:01:48.837170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.837179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.083 [2024-09-30 22:01:48.837200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:04.083 [2024-09-30 22:01:48.837208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.837239] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5a2be95a-d5b1-4b79-bff0-a31df33a7add 00:20:04.083 [2024-09-30 22:01:48.838303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.838330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:04.083 [2024-09-30 22:01:48.838340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:04.083 [2024-09-30 22:01:48.838352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.843677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.843712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.083 [2024-09-30 22:01:48.843722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.252 ms 00:20:04.083 [2024-09-30 22:01:48.843735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.843814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.843824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.083 [2024-09-30 22:01:48.843835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:04.083 [2024-09-30 22:01:48.843846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.843886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.843897] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:04.083 [2024-09-30 22:01:48.843904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:04.083 [2024-09-30 22:01:48.843913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.843934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:04.083 [2024-09-30 22:01:48.845433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.845463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.083 [2024-09-30 22:01:48.845474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.502 ms 00:20:04.083 [2024-09-30 22:01:48.845481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.845517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.845525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:04.083 [2024-09-30 22:01:48.845537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:04.083 [2024-09-30 22:01:48.845543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.845561] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:04.083 [2024-09-30 22:01:48.845697] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:04.083 [2024-09-30 22:01:48.845712] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:04.083 [2024-09-30 22:01:48.845722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:04.083 [2024-09-30 22:01:48.845734] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:04.083 [2024-09-30 22:01:48.845745] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:04.083 [2024-09-30 22:01:48.845761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:04.083 [2024-09-30 22:01:48.845768] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:04.083 [2024-09-30 22:01:48.845777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:04.083 [2024-09-30 22:01:48.845785] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:04.083 [2024-09-30 22:01:48.845794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.845801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:04.083 [2024-09-30 22:01:48.845810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:20:04.083 [2024-09-30 22:01:48.845817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.845903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.083 [2024-09-30 22:01:48.845910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:04.083 [2024-09-30 22:01:48.845919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:04.083 [2024-09-30 22:01:48.845925] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.083 [2024-09-30 22:01:48.846022] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:04.083 [2024-09-30 22:01:48.846030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:04.083 [2024-09-30 22:01:48.846044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.083 [2024-09-30 22:01:48.846051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:04.083 [2024-09-30 22:01:48.846070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:04.083 [2024-09-30 22:01:48.846087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:04.083 [2024-09-30 22:01:48.846096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.083 [2024-09-30 22:01:48.846112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:04.083 [2024-09-30 22:01:48.846120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:04.083 [2024-09-30 22:01:48.846130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.083 [2024-09-30 22:01:48.846138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:04.083 [2024-09-30 22:01:48.846147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:04.083 [2024-09-30 22:01:48.846155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:04.083 [2024-09-30 22:01:48.846172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:04.083 [2024-09-30 22:01:48.846181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:04.083 [2024-09-30 22:01:48.846213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.083 [2024-09-30 22:01:48.846229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:04.083 [2024-09-30 22:01:48.846237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.083 [2024-09-30 22:01:48.846254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:04.083 [2024-09-30 22:01:48.846264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.083 [2024-09-30 22:01:48.846283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:04.083 [2024-09-30 22:01:48.846290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.083 [2024-09-30 
22:01:48.846307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:04.083 [2024-09-30 22:01:48.846317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.083 [2024-09-30 22:01:48.846334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:04.083 [2024-09-30 22:01:48.846341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:04.083 [2024-09-30 22:01:48.846350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.083 [2024-09-30 22:01:48.846358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:04.083 [2024-09-30 22:01:48.846367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:04.083 [2024-09-30 22:01:48.846375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:04.083 [2024-09-30 22:01:48.846391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:04.083 [2024-09-30 22:01:48.846400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846407] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:04.083 [2024-09-30 22:01:48.846418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:04.083 [2024-09-30 22:01:48.846429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.083 [2024-09-30 22:01:48.846439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.083 [2024-09-30 22:01:48.846447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:04.083 [2024-09-30 22:01:48.846456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:04.083 [2024-09-30 22:01:48.846464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:04.084 [2024-09-30 22:01:48.846473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:04.084 [2024-09-30 22:01:48.846480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:04.084 [2024-09-30 22:01:48.846489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:04.084 [2024-09-30 22:01:48.846499] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:04.084 [2024-09-30 22:01:48.846509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.084 [2024-09-30 22:01:48.846523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:04.084 [2024-09-30 22:01:48.846532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:04.084 [2024-09-30 22:01:48.846540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:04.084 [2024-09-30 22:01:48.846550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:04.084 [2024-09-30 22:01:48.846558] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:04.084 [2024-09-30 22:01:48.846567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:04.084 [2024-09-30 22:01:48.846574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:04.084 [2024-09-30 22:01:48.846584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:04.084 [2024-09-30 22:01:48.846591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:04.084 [2024-09-30 22:01:48.846600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:04.084 [2024-09-30 22:01:48.846606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:04.084 [2024-09-30 22:01:48.846615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:04.084 [2024-09-30 22:01:48.846622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:04.084 [2024-09-30 22:01:48.846631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:04.084 [2024-09-30 22:01:48.846638] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:04.084 [2024-09-30 22:01:48.846648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.084 [2024-09-30 22:01:48.846656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:04.084 [2024-09-30 22:01:48.846664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:04.084 [2024-09-30 22:01:48.846672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:04.084 [2024-09-30 22:01:48.846680] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:04.084 [2024-09-30 22:01:48.846688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.084 [2024-09-30 22:01:48.846698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:04.084 [2024-09-30 22:01:48.846705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:20:04.084 [2024-09-30 22:01:48.846718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.084 [2024-09-30 22:01:48.846755] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
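The scrub that follows wipes the NV cache data region chunk by chunk. Two cross-checks on the layout dump above, in bash arithmetic; the 4-byte figure is the 'L2P address size' the log itself reports, while treating chunk_size=262144 as a count of 4 KiB blocks is an inference from this log, not something the script states:

echo $(( 20971520 * 4 / 1024 / 1024 ))     # -> 80, matching the 80.00 MiB l2p region
echo $(( 262144 * 4096 / 1024 / 1024 ))    # -> 1024 MiB per NV cache chunk
echo $(( 5171 / 1024 ))                    # -> 5, matching 'NV cache chunk count 5' and the scrub below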
00:20:04.084 [2024-09-30 22:01:48.846766] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:06.612 [2024-09-30 22:01:51.245371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.245429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:06.612 [2024-09-30 22:01:51.245447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2398.610 ms 00:20:06.612 [2024-09-30 22:01:51.245457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.253934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.253980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.612 [2024-09-30 22:01:51.253992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.398 ms 00:20:06.612 [2024-09-30 22:01:51.254004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.254086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.254100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.612 [2024-09-30 22:01:51.254113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:06.612 [2024-09-30 22:01:51.254122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.262110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.262151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.612 [2024-09-30 22:01:51.262160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.944 ms 00:20:06.612 [2024-09-30 22:01:51.262176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.262214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.262225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.612 [2024-09-30 22:01:51.262233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:06.612 [2024-09-30 22:01:51.262241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.262557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.262587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.612 [2024-09-30 22:01:51.262596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:06.612 [2024-09-30 22:01:51.262607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.262705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.262715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.612 [2024-09-30 22:01:51.262724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:06.612 [2024-09-30 22:01:51.262733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.276236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.276277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.612 [2024-09-30 
22:01:51.276289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.483 ms 00:20:06.612 [2024-09-30 22:01:51.276299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.287572] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:06.612 [2024-09-30 22:01:51.290555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.290585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.612 [2024-09-30 22:01:51.290599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.151 ms 00:20:06.612 [2024-09-30 22:01:51.290606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.339794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.339832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:06.612 [2024-09-30 22:01:51.339847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.160 ms 00:20:06.612 [2024-09-30 22:01:51.339857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.340028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.340043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.612 [2024-09-30 22:01:51.340054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:20:06.612 [2024-09-30 22:01:51.340061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.343060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.343097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:06.612 [2024-09-30 22:01:51.343108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.967 ms 00:20:06.612 [2024-09-30 22:01:51.343115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.345543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.345572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:06.612 [2024-09-30 22:01:51.345583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.373 ms 00:20:06.612 [2024-09-30 22:01:51.345590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.345904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.345924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.612 [2024-09-30 22:01:51.345937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:20:06.612 [2024-09-30 22:01:51.345944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.373330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.373366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:06.612 [2024-09-30 22:01:51.373378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.364 ms 00:20:06.612 [2024-09-30 22:01:51.373388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.377166] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.377222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:06.612 [2024-09-30 22:01:51.377233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.745 ms 00:20:06.612 [2024-09-30 22:01:51.377241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.380157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.380197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:06.612 [2024-09-30 22:01:51.380209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.892 ms 00:20:06.612 [2024-09-30 22:01:51.380218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.383330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.383362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.612 [2024-09-30 22:01:51.383376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:20:06.612 [2024-09-30 22:01:51.383384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.383409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.383417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.612 [2024-09-30 22:01:51.383427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:06.612 [2024-09-30 22:01:51.383434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.383507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.612 [2024-09-30 22:01:51.383518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.612 [2024-09-30 22:01:51.383527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:06.612 [2024-09-30 22:01:51.383534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.612 [2024-09-30 22:01:51.384404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2547.257 ms, result 0 00:20:06.612 { 00:20:06.612 "name": "ftl0", 00:20:06.612 "uuid": "5a2be95a-d5b1-4b79-bff0-a31df33a7add" 00:20:06.612 } 00:20:06.612 22:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:06.612 22:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:06.870 22:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:06.870 22:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:06.870 22:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:07.128 /dev/nbd0 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:07.128 1+0 records in 00:20:07.128 1+0 records out 00:20:07.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276289 s, 14.8 MB/s 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:20:07.128 22:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:07.128 [2024-09-30 22:01:51.893093] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:20:07.128 [2024-09-30 22:01:51.893220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88379 ] 00:20:07.386 [2024-09-30 22:01:52.020827] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:07.386 [2024-09-30 22:01:52.041410] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.386 [2024-09-30 22:01:52.075221] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.103  Copying: 197/1024 [MB] (197 MBps) Copying: 395/1024 [MB] (198 MBps) Copying: 627/1024 [MB] (232 MBps) Copying: 887/1024 [MB] (259 MBps) Copying: 1024/1024 [MB] (average 226 MBps) 00:20:12.103 00:20:12.103 22:01:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:14.004 22:01:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:14.262 [2024-09-30 22:01:58.871138] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:20:14.262 [2024-09-30 22:01:58.871263] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88455 ] 00:20:14.262 [2024-09-30 22:01:58.998415] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:20:14.262 [2024-09-30 22:01:59.020144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.262 [2024-09-30 22:01:59.053226] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.500  Copying: 29/1024 [MB] (29 MBps) Copying: 60/1024 [MB] (31 MBps) Copying: 93/1024 [MB] (33 MBps) Copying: 124/1024 [MB] (30 MBps) Copying: 156/1024 [MB] (32 MBps) Copying: 188/1024 [MB] (31 MBps) Copying: 220/1024 [MB] (32 MBps) Copying: 255/1024 [MB] (34 MBps) Copying: 289/1024 [MB] (34 MBps) Copying: 321/1024 [MB] (31 MBps) Copying: 355/1024 [MB] (34 MBps) Copying: 387/1024 [MB] (32 MBps) Copying: 417/1024 [MB] (29 MBps) Copying: 448/1024 [MB] (30 MBps) Copying: 482/1024 [MB] (34 MBps) Copying: 513/1024 [MB] (30 MBps) Copying: 546/1024 [MB] (33 MBps) Copying: 583/1024 [MB] (36 MBps) Copying: 613/1024 [MB] (29 MBps) Copying: 643/1024 [MB] (30 MBps) Copying: 673/1024 [MB] (29 MBps) Copying: 703/1024 [MB] (30 MBps) Copying: 735/1024 [MB] (31 MBps) Copying: 764/1024 [MB] (29 MBps) Copying: 794/1024 [MB] (30 MBps) Copying: 827/1024 [MB] (33 MBps) Copying: 858/1024 [MB] (30 MBps) Copying: 895/1024 [MB] (37 MBps) Copying: 933/1024 [MB] (37 MBps) Copying: 967/1024 [MB] (34 MBps) Copying: 998/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 32 MBps) 00:20:46.500 00:20:46.500 22:02:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:20:46.500 22:02:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:20:46.500 22:02:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:46.760 [2024-09-30 22:02:31.453291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.453334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:46.760 [2024-09-30 22:02:31.453345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:46.760 [2024-09-30 22:02:31.453353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.453371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:46.760 [2024-09-30 22:02:31.453783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.453806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:46.760 [2024-09-30 22:02:31.453819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:20:46.760 [2024-09-30 22:02:31.453825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.455702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.455730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:46.760 [2024-09-30 22:02:31.455740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.857 ms 00:20:46.760 [2024-09-30 22:02:31.455746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.468916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.468948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:46.760 [2024-09-30 22:02:31.468958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.153 ms 
00:20:46.760 [2024-09-30 22:02:31.468967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.473855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.473882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:46.760 [2024-09-30 22:02:31.473892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.860 ms 00:20:46.760 [2024-09-30 22:02:31.473899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.474882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.474911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:46.760 [2024-09-30 22:02:31.474920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.926 ms 00:20:46.760 [2024-09-30 22:02:31.474926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.479029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.479060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:46.760 [2024-09-30 22:02:31.479071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.072 ms 00:20:46.760 [2024-09-30 22:02:31.479077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.479185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.479204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:46.760 [2024-09-30 22:02:31.479212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:46.760 [2024-09-30 22:02:31.479218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.480916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.480945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:46.760 [2024-09-30 22:02:31.480955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:20:46.760 [2024-09-30 22:02:31.480961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.482689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.482717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:46.760 [2024-09-30 22:02:31.482725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:20:46.760 [2024-09-30 22:02:31.482730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.483851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.483878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:46.760 [2024-09-30 22:02:31.483887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:20:46.760 [2024-09-30 22:02:31.483892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.485012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.760 [2024-09-30 22:02:31.485041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:46.760 [2024-09-30 22:02:31.485050] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.071 ms 00:20:46.760 [2024-09-30 22:02:31.485056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.760 [2024-09-30 22:02:31.485083] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:46.760 [2024-09-30 22:02:31.485095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485599] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:46.760 [2024-09-30 22:02:31.485606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 
22:02:31.485761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:46.761 [2024-09-30 22:02:31.485788] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:46.761 [2024-09-30 22:02:31.485800] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a2be95a-d5b1-4b79-bff0-a31df33a7add 00:20:46.761 [2024-09-30 22:02:31.485807] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:46.761 [2024-09-30 22:02:31.485813] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:46.761 [2024-09-30 22:02:31.485819] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:46.761 [2024-09-30 22:02:31.485826] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:46.761 [2024-09-30 22:02:31.485831] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:46.761 [2024-09-30 22:02:31.485838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:46.761 [2024-09-30 22:02:31.485844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:46.761 [2024-09-30 22:02:31.485849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:46.761 [2024-09-30 22:02:31.485855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:46.761 [2024-09-30 22:02:31.485862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.761 [2024-09-30 22:02:31.485868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:46.761 [2024-09-30 22:02:31.485875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:20:46.761 [2024-09-30 22:02:31.485881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.487260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.761 [2024-09-30 22:02:31.487281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:46.761 [2024-09-30 22:02:31.487290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:20:46.761 [2024-09-30 22:02:31.487296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.487366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:46.761 [2024-09-30 22:02:31.487373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:46.761 [2024-09-30 22:02:31.487381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:46.761 [2024-09-30 22:02:31.487388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.492214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.492242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:46.761 [2024-09-30 22:02:31.492251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.492258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.492308] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.492314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:46.761 [2024-09-30 22:02:31.492322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.492331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.492385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.492393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:46.761 [2024-09-30 22:02:31.492400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.492406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.492420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.492426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:46.761 [2024-09-30 22:02:31.492433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.492438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.500717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.500752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:46.761 [2024-09-30 22:02:31.500762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.500770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.507868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.507899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:46.761 [2024-09-30 22:02:31.507908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.507914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.507955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.507966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:46.761 [2024-09-30 22:02:31.507974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.507980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.508022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.508029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:46.761 [2024-09-30 22:02:31.508037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.508045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.508107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.508114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:46.761 [2024-09-30 22:02:31.508122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.508127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:20:46.761 [2024-09-30 22:02:31.508150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.508157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:46.761 [2024-09-30 22:02:31.508164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.508170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.508215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.508224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:46.761 [2024-09-30 22:02:31.508231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.508237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.508271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:46.761 [2024-09-30 22:02:31.508278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:46.761 [2024-09-30 22:02:31.508285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:46.761 [2024-09-30 22:02:31.508291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:46.761 [2024-09-30 22:02:31.508393] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 55.074 ms, result 0 00:20:46.761 true 00:20:46.761 22:02:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 88254 00:20:46.761 22:02:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid88254 00:20:46.761 22:02:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:20:47.020 [2024-09-30 22:02:31.577464] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:20:47.020 [2024-09-30 22:02:31.577547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88805 ] 00:20:47.020 [2024-09-30 22:02:31.699269] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:47.020 [2024-09-30 22:02:31.718708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.020 [2024-09-30 22:02:31.750406] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.194  Copying: 260/1024 [MB] (260 MBps) Copying: 521/1024 [MB] (260 MBps) Copying: 780/1024 [MB] (259 MBps) Copying: 1024/1024 [MB] (average 259 MBps) 00:20:51.194 00:20:51.194 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 88254 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:20:51.194 22:02:35 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:51.194 [2024-09-30 22:02:35.972562] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
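The preceding steps are the dirty-shutdown half of the test: the spdk_tgt that owned ftl0 is killed with SIGKILL (no bdev_ftl_unload this time, so the device is left in the dirty state set at startup), and a standalone spdk_dd then re-attaches to the same FTL bdev purely from the JSON saved earlier with save_subsystem_config. A minimal sketch of that pattern, with $tgt_pid and the file paths as illustrative placeholders rather than values taken from the harness:
# simulate a dirty shutdown: kill the target without unloading ftl0
kill -9 "$tgt_pid"
rm -f "/dev/shm/spdk_tgt_trace.pid$tgt_pid"
# re-open the FTL bdev from the saved config and write the second half of the device
spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json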
00:20:51.194 [2024-09-30 22:02:35.972682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88854 ] 00:20:51.453 [2024-09-30 22:02:36.100876] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:51.453 [2024-09-30 22:02:36.119145] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.453 [2024-09-30 22:02:36.151879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.453 [2024-09-30 22:02:36.237465] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.453 [2024-09-30 22:02:36.237521] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.712 [2024-09-30 22:02:36.299359] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:51.712 [2024-09-30 22:02:36.299650] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:51.712 [2024-09-30 22:02:36.300415] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:51.712 [2024-09-30 22:02:36.466177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.466220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:51.712 [2024-09-30 22:02:36.466230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:51.712 [2024-09-30 22:02:36.466236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.466270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.466279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.712 [2024-09-30 22:02:36.466285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:51.712 [2024-09-30 22:02:36.466293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.466305] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:51.712 [2024-09-30 22:02:36.466475] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:51.712 [2024-09-30 22:02:36.466489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.466495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.712 [2024-09-30 22:02:36.466501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:20:51.712 [2024-09-30 22:02:36.466511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.467511] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:51.712 [2024-09-30 22:02:36.469617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.469643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:51.712 [2024-09-30 22:02:36.469656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.108 ms 00:20:51.712 [2024-09-30 22:02:36.469662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 
22:02:36.469704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.469712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:51.712 [2024-09-30 22:02:36.469718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:51.712 [2024-09-30 22:02:36.469723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.474502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.474524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.712 [2024-09-30 22:02:36.474532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:20:51.712 [2024-09-30 22:02:36.474538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.474600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.474607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.712 [2024-09-30 22:02:36.474613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:51.712 [2024-09-30 22:02:36.474618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.474653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.474660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:51.712 [2024-09-30 22:02:36.474668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.712 [2024-09-30 22:02:36.474674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.474694] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.712 [2024-09-30 22:02:36.475912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.475933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.712 [2024-09-30 22:02:36.475940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:20:51.712 [2024-09-30 22:02:36.475949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.475979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.475985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:51.712 [2024-09-30 22:02:36.475992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:51.712 [2024-09-30 22:02:36.475997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.476010] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:51.712 [2024-09-30 22:02:36.476024] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:51.712 [2024-09-30 22:02:36.476053] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:51.712 [2024-09-30 22:02:36.476076] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:51.712 [2024-09-30 22:02:36.476158] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob 
store 0x150 bytes 00:20:51.712 [2024-09-30 22:02:36.476166] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:51.712 [2024-09-30 22:02:36.476174] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:51.712 [2024-09-30 22:02:36.476202] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476210] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476216] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:51.712 [2024-09-30 22:02:36.476221] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:51.712 [2024-09-30 22:02:36.476227] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:51.712 [2024-09-30 22:02:36.476235] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:51.712 [2024-09-30 22:02:36.476242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.476248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:51.712 [2024-09-30 22:02:36.476254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:20:51.712 [2024-09-30 22:02:36.476259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.476322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.712 [2024-09-30 22:02:36.476328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:51.712 [2024-09-30 22:02:36.476336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:51.712 [2024-09-30 22:02:36.476341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.712 [2024-09-30 22:02:36.476411] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:51.712 [2024-09-30 22:02:36.476420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:51.712 [2024-09-30 22:02:36.476426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:51.712 [2024-09-30 22:02:36.476450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:51.712 [2024-09-30 22:02:36.476466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.712 [2024-09-30 22:02:36.476476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:51.712 [2024-09-30 22:02:36.476481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:51.712 [2024-09-30 22:02:36.476487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.712 [2024-09-30 22:02:36.476492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 
00:20:51.712 [2024-09-30 22:02:36.476502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:51.712 [2024-09-30 22:02:36.476508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:51.712 [2024-09-30 22:02:36.476518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:51.712 [2024-09-30 22:02:36.476533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:51.712 [2024-09-30 22:02:36.476548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:51.712 [2024-09-30 22:02:36.476563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:51.712 [2024-09-30 22:02:36.476581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:51.712 [2024-09-30 22:02:36.476602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.712 [2024-09-30 22:02:36.476614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:51.712 [2024-09-30 22:02:36.476620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:51.712 [2024-09-30 22:02:36.476625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.712 [2024-09-30 22:02:36.476631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:51.712 [2024-09-30 22:02:36.476637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:51.712 [2024-09-30 22:02:36.476642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:51.712 [2024-09-30 22:02:36.476654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:51.712 [2024-09-30 22:02:36.476659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476665] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:51.712 [2024-09-30 22:02:36.476673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:51.712 [2024-09-30 22:02:36.476679] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.712 [2024-09-30 22:02:36.476693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:51.712 [2024-09-30 22:02:36.476700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:51.712 [2024-09-30 22:02:36.476705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:51.712 [2024-09-30 22:02:36.476711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:51.712 [2024-09-30 22:02:36.476717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:51.712 [2024-09-30 22:02:36.476723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:51.712 [2024-09-30 22:02:36.476729] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:51.712 [2024-09-30 22:02:36.476739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.712 [2024-09-30 22:02:36.476746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:51.713 [2024-09-30 22:02:36.476752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:51.713 [2024-09-30 22:02:36.476758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:51.713 [2024-09-30 22:02:36.476765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:51.713 [2024-09-30 22:02:36.476771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:51.713 [2024-09-30 22:02:36.476777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:51.713 [2024-09-30 22:02:36.476783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:51.713 [2024-09-30 22:02:36.476790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:51.713 [2024-09-30 22:02:36.476797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:51.713 [2024-09-30 22:02:36.476803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:51.713 [2024-09-30 22:02:36.476809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:51.713 [2024-09-30 22:02:36.476815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:51.713 [2024-09-30 22:02:36.476820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:51.713 [2024-09-30 22:02:36.476827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:51.713 [2024-09-30 22:02:36.476833] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:51.713 [2024-09-30 22:02:36.476840] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.713 [2024-09-30 22:02:36.476848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:51.713 [2024-09-30 22:02:36.476855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:51.713 [2024-09-30 22:02:36.476861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:51.713 [2024-09-30 22:02:36.476868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
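The two region tables above are easy to sanity-check by hand: within each table, every region's blk_offs should equal the previous region's blk_offs + blk_sz, and the trailing free region (type:0xfffffffe) should end at the device boundary. A minimal shell sketch of that check for the nvc table, assuming the FTL's 4 KiB block size (so 5171 MiB = 1,323,776 = 0x143300 blocks):

  end=0
  for r in 0x0:0x20 0x20:0x5000 0x5020:0x80 0x50a0:0x80 0x5120:0x800 \
           0x5920:0x800 0x6120:0x800 0x6920:0x800 0x7120:0x40 0x7160:0x40 \
           0x71a0:0x20 0x71c0:0x20 0x71e0:0x20 0x7200:0x20 0x7220:0x13c0e0; do
    offs=${r%:*}; sz=${r#*:}                      # split "blk_offs:blk_sz"
    [ $((offs)) -eq $((end)) ] || echo "gap before $offs"
    end=$((offs + sz))
  done
  printf 'table ends at %#x of %#x blocks\n' "$end" $((5171 * 1024 * 1024 / 4096))

Both values come out to 0x143300 here, i.e. the table tiles the 5171.00 MiB NV cache device exactly with no gaps.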
00:20:51.713 [2024-09-30 22:02:36.476874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.476881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:51.713 [2024-09-30 22:02:36.476887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:20:51.713 [2024-09-30 22:02:36.476895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.493099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.493130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.713 [2024-09-30 22:02:36.493139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.172 ms 00:20:51.713 [2024-09-30 22:02:36.493145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.493225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.493234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:51.713 [2024-09-30 22:02:36.493241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:51.713 [2024-09-30 22:02:36.493246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.501927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.501962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.713 [2024-09-30 22:02:36.501975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.637 ms 00:20:51.713 [2024-09-30 22:02:36.501984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.502024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.502038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.713 [2024-09-30 22:02:36.502049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:51.713 [2024-09-30 22:02:36.502057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.502432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.502459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.713 [2024-09-30 22:02:36.502476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:20:51.713 [2024-09-30 22:02:36.502485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.502637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.502656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.713 [2024-09-30 22:02:36.502668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:51.713 [2024-09-30 22:02:36.502678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.507777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.507808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.713 [2024-09-30 22:02:36.507819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.072 ms 00:20:51.713 [2024-09-30 22:02:36.507829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.510121] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:51.713 [2024-09-30 22:02:36.510148] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:51.713 [2024-09-30 22:02:36.510160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.510166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:51.713 [2024-09-30 22:02:36.510174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.227 ms 00:20:51.713 [2024-09-30 22:02:36.510179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.521378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.521403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:51.713 [2024-09-30 22:02:36.521412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.158 ms 00:20:51.713 [2024-09-30 22:02:36.521418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.713 [2024-09-30 22:02:36.522946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.713 [2024-09-30 22:02:36.522969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:51.713 [2024-09-30 22:02:36.522977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:20:51.713 [2024-09-30 22:02:36.522982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.524337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.971 [2024-09-30 22:02:36.524359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:51.971 [2024-09-30 22:02:36.524367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.330 ms 00:20:51.971 [2024-09-30 22:02:36.524373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.524609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.971 [2024-09-30 22:02:36.524624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:51.971 [2024-09-30 22:02:36.524633] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:20:51.971 [2024-09-30 22:02:36.524638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.539268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.971 [2024-09-30 22:02:36.539300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:51.971 [2024-09-30 22:02:36.539308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.618 ms 00:20:51.971 [2024-09-30 22:02:36.539315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.545090] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:51.971 [2024-09-30 22:02:36.547133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.971 [2024-09-30 22:02:36.547155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:51.971 [2024-09-30 22:02:36.547163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.787 ms 00:20:51.971 [2024-09-30 22:02:36.547170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.547223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.971 [2024-09-30 22:02:36.547232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:51.971 [2024-09-30 22:02:36.547240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:51.971 [2024-09-30 22:02:36.547250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.547301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.971 [2024-09-30 22:02:36.547309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:51.971 [2024-09-30 22:02:36.547316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:51.971 [2024-09-30 22:02:36.547322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.547337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.971 [2024-09-30 22:02:36.547344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:51.971 [2024-09-30 22:02:36.547354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.971 [2024-09-30 22:02:36.547360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.971 [2024-09-30 22:02:36.547388] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:51.972 [2024-09-30 22:02:36.547395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.972 [2024-09-30 22:02:36.547401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:51.972 [2024-09-30 22:02:36.547407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:51.972 [2024-09-30 22:02:36.547413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.972 [2024-09-30 22:02:36.550303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.972 [2024-09-30 22:02:36.550328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:51.972 [2024-09-30 22:02:36.550336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.877 ms 00:20:51.972 [2024-09-30 22:02:36.550342] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.972 [2024-09-30 22:02:36.550397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.972 [2024-09-30 22:02:36.550404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:51.972 [2024-09-30 22:02:36.550410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:51.972 [2024-09-30 22:02:36.550416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.972 [2024-09-30 22:02:36.551386] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 84.878 ms, result 0 00:21:15.386  Copying: 45/1024 [MB] (45 MBps) Copying: 97/1024 [MB] (52 MBps) Copying: 123/1024 [MB] (25 MBps) Copying: 158/1024 [MB] (35 MBps) Copying: 204/1024 [MB] (45 MBps) Copying: 250/1024 [MB] (46 MBps) Copying: 295/1024 [MB] (45 MBps) Copying: 339/1024 [MB] (44 MBps) Copying: 385/1024 [MB] (45 MBps) Copying: 431/1024 [MB] (46 MBps) Copying: 477/1024 [MB] (45 MBps) Copying: 523/1024 [MB] (46 MBps) Copying: 570/1024 [MB] (46 MBps) Copying: 617/1024 [MB] (47 MBps) Copying: 665/1024 [MB] (47 MBps) Copying: 714/1024 [MB] (49 MBps) Copying: 760/1024 [MB] (45 MBps) Copying: 805/1024 [MB] (45 MBps) Copying: 851/1024 [MB] (46 MBps) Copying: 897/1024 [MB] (46 MBps) Copying: 943/1024 [MB] (45 MBps) Copying: 991/1024 [MB] (47 MBps) Copying: 1023/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 43 MBps)[2024-09-30 22:02:59.974463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:02:59.974514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:15.386 [2024-09-30 22:02:59.974525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:15.386 [2024-09-30 22:02:59.974538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:02:59.976132] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:15.386 [2024-09-30 22:02:59.979031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:02:59.979062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:15.386 [2024-09-30 22:02:59.979070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:21:15.386 [2024-09-30 22:02:59.979077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:02:59.987748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:02:59.987779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:15.386 [2024-09-30 22:02:59.987787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.417 ms 00:21:15.386 [2024-09-30 22:02:59.987793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.003322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.003361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:15.386 [2024-09-30 22:03:00.003369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.517 ms 00:21:15.386 [2024-09-30 22:03:00.003376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.008151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 
00:21:15.386 Copying: 1024/1024 [MB] (average 43 MBps)[2024-09-30 22:02:59.974463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:02:59.974514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:15.386 [2024-09-30 22:02:59.974525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:15.386 [2024-09-30 22:02:59.974538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:02:59.976132] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:15.386 [2024-09-30 22:02:59.979031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:02:59.979062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:15.386 [2024-09-30 22:02:59.979070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.858 ms 00:21:15.386 [2024-09-30 22:02:59.979077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:02:59.987748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:02:59.987779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:15.386 [2024-09-30 22:02:59.987787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.417 ms 00:21:15.386 [2024-09-30 22:02:59.987793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.003322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.003361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:15.386 [2024-09-30 22:03:00.003369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.517 ms 00:21:15.386 [2024-09-30 22:03:00.003376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.008151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.008179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:15.386 [2024-09-30 22:03:00.008193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 00:21:15.386 [2024-09-30 22:03:00.008201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.009174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.009219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:15.386 [2024-09-30 22:03:00.009227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:21:15.386 [2024-09-30 22:03:00.009233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.012283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.012333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:15.386 [2024-09-30 22:03:00.012346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:21:15.386 [2024-09-30 22:03:00.012360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.061329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.061373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:15.386 [2024-09-30 22:03:00.061382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.847 ms 00:21:15.386 [2024-09-30 22:03:00.061389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.062885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.062914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:15.386 [2024-09-30 22:03:00.062921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.484 ms 00:21:15.386 [2024-09-30 22:03:00.062927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.063914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.063951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:15.386 [2024-09-30 22:03:00.063958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.962 ms 00:21:15.386 [2024-09-30 22:03:00.063964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.064789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.064818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:15.386 [2024-09-30 22:03:00.064825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:21:15.386 [2024-09-30 22:03:00.064831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.065582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.386 [2024-09-30 22:03:00.065610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:15.386 [2024-09-30 22:03:00.065617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:21:15.386 [2024-09-30 22:03:00.065623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.386 [2024-09-30 22:03:00.065646] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:15.386 [2024-09-30 22:03:00.065657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122112 / 261120 wr_cnt: 1 state: open 00:21:15.386 [2024-09-30 22:03:00.065674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:15.386 [2024-09-30 22:03:00.065815] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 
22:03:00.065963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.065998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:21:15.387 [2024-09-30 22:03:00.066114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:15.387 [2024-09-30 22:03:00.066288] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:15.387 [2024-09-30 22:03:00.066295] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a2be95a-d5b1-4b79-bff0-a31df33a7add 00:21:15.387 [2024-09-30 22:03:00.066301] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122112 00:21:15.387 [2024-09-30 22:03:00.066307] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123072 00:21:15.387 [2024-09-30 22:03:00.066312] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122112 00:21:15.387 [2024-09-30 22:03:00.066319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:21:15.387 [2024-09-30 22:03:00.066329] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:15.387 [2024-09-30 22:03:00.066335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:15.387 [2024-09-30 22:03:00.066340] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:15.387 [2024-09-30 22:03:00.066345] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:15.387 [2024-09-30 22:03:00.066351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
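A consistency check worth noting in the dump above: WAF (write amplification factor) is simply total writes / user writes = 123072 / 122112 ≈ 1.0079, matching the logged value; the 960 extra blocks are presumably the FTL's own metadata and housekeeping writes on top of the user data. Likewise, total valid LBAs (122112) agrees with the single open band in the validity table (Band 1: 122112 / 261120, wr_cnt: 1), with all other bands free.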
00:21:15.387 [2024-09-30 22:03:00.066357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.387 [2024-09-30 22:03:00.066363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:15.387 [2024-09-30 22:03:00.066369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.712 ms 00:21:15.387 [2024-09-30 22:03:00.066377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.387 [2024-09-30 22:03:00.067704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.387 [2024-09-30 22:03:00.067731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:15.387 [2024-09-30 22:03:00.067739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.316 ms 00:21:15.387 [2024-09-30 22:03:00.067745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.387 [2024-09-30 22:03:00.067814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.388 [2024-09-30 22:03:00.067824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:15.388 [2024-09-30 22:03:00.067830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:15.388 [2024-09-30 22:03:00.067837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.071938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.071962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:15.388 [2024-09-30 22:03:00.071969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.071976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.072015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.072030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:15.388 [2024-09-30 22:03:00.072051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.072057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.072104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.072112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:15.388 [2024-09-30 22:03:00.072118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.072124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.072135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.072145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:15.388 [2024-09-30 22:03:00.072153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.072159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.080365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.080399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:15.388 [2024-09-30 22:03:00.080407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.080413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.086967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.087007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:15.388 [2024-09-30 22:03:00.087015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.087021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.087056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.087063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:15.388 [2024-09-30 22:03:00.087069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.087074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.087092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.087099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:15.388 [2024-09-30 22:03:00.087104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.087112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.087159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.087167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:15.388 [2024-09-30 22:03:00.087173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.087179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.087214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.087222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:21:15.388 [2024-09-30 22:03:00.087228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.087234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.087266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.087273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:15.388 [2024-09-30 22:03:00.087279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.087285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.087317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:15.388 [2024-09-30 22:03:00.087334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:15.388 [2024-09-30 22:03:00.087340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:15.388 [2024-09-30 22:03:00.087349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.388 [2024-09-30 22:03:00.087442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 113.670 ms, result 0 00:21:16.761 00:21:16.761 00:21:16.761 22:03:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:19.292 22:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:19.292 [2024-09-30 22:03:03.640488] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:21:19.292 [2024-09-30 22:03:03.640583] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89139 ] 00:21:19.292 [2024-09-30 22:03:03.763352] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
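The @90/@93 harness lines above are the heart of the dirty-shutdown check: hash the reference file, read the same 262144 blocks back out of the FTL bdev with spdk_dd, and compare digests. A minimal sketch of that flow using the same repo-relative paths (the canonical sequence lives in test/ftl/dirty_shutdown.sh; the comparison step here is illustrative):

  cd /home/vagrant/spdk_repo/spdk
  md5sum test/ftl/testfile2                           # digest of the data written before the dirty shutdown
  build/bin/spdk_dd --ib=ftl0 --of=test/ftl/testfile \
      --count=262144 --json=test/ftl/config/ftl.json  # read the range back from the FTL bdev
  # the two digests must match if no acknowledged write was lost across the dirty shutdown
  [ "$(md5sum < test/ftl/testfile)" = "$(md5sum < test/ftl/testfile2)" ] && echo "data intact"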
00:21:19.292 [2024-09-30 22:03:03.782366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.292 [2024-09-30 22:03:03.814211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.292 [2024-09-30 22:03:03.898919] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:19.292 [2024-09-30 22:03:03.898977] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:19.292 [2024-09-30 22:03:04.045629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.045671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:19.292 [2024-09-30 22:03:04.045681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:19.292 [2024-09-30 22:03:04.045689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.045724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.045732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:19.292 [2024-09-30 22:03:04.045738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:19.292 [2024-09-30 22:03:04.045743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.045757] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:19.292 [2024-09-30 22:03:04.045970] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:19.292 [2024-09-30 22:03:04.045983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.045989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:19.292 [2024-09-30 22:03:04.045995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:21:19.292 [2024-09-30 22:03:04.046002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.046968] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:19.292 [2024-09-30 22:03:04.049069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.049106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:19.292 [2024-09-30 22:03:04.049117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.102 ms 00:21:19.292 [2024-09-30 22:03:04.049124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.049168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.049175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:19.292 [2024-09-30 22:03:04.049182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:19.292 [2024-09-30 22:03:04.049200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.053937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.053966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:19.292 [2024-09-30 22:03:04.053978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.698 ms 00:21:19.292 [2024-09-30 22:03:04.053986] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.054038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.054044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:19.292 [2024-09-30 22:03:04.054052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:19.292 [2024-09-30 22:03:04.054058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.054092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.054099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:19.292 [2024-09-30 22:03:04.054105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:19.292 [2024-09-30 22:03:04.054114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.054129] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:19.292 [2024-09-30 22:03:04.055347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.055370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:19.292 [2024-09-30 22:03:04.055377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.221 ms 00:21:19.292 [2024-09-30 22:03:04.055382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.055408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.055419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:19.292 [2024-09-30 22:03:04.055425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:19.292 [2024-09-30 22:03:04.055434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.055449] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:19.292 [2024-09-30 22:03:04.055463] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:19.292 [2024-09-30 22:03:04.055489] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:19.292 [2024-09-30 22:03:04.055500] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:19.292 [2024-09-30 22:03:04.055578] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:19.292 [2024-09-30 22:03:04.055591] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:19.292 [2024-09-30 22:03:04.055599] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:19.292 [2024-09-30 22:03:04.055609] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:19.292 [2024-09-30 22:03:04.055615] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:19.292 [2024-09-30 22:03:04.055622] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:19.292 [2024-09-30 22:03:04.055628] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:21:19.292 [2024-09-30 22:03:04.055633] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:19.292 [2024-09-30 22:03:04.055638] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:19.292 [2024-09-30 22:03:04.055643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.055652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:19.292 [2024-09-30 22:03:04.055662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:21:19.292 [2024-09-30 22:03:04.055668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.055733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.292 [2024-09-30 22:03:04.055740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:19.292 [2024-09-30 22:03:04.055746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:19.292 [2024-09-30 22:03:04.055752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.292 [2024-09-30 22:03:04.055827] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:19.292 [2024-09-30 22:03:04.055840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:19.292 [2024-09-30 22:03:04.055846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:19.292 [2024-09-30 22:03:04.055852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.292 [2024-09-30 22:03:04.055859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:19.292 [2024-09-30 22:03:04.055864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:19.292 [2024-09-30 22:03:04.055869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:19.292 [2024-09-30 22:03:04.055874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:19.292 [2024-09-30 22:03:04.055884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:19.292 [2024-09-30 22:03:04.055889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:19.292 [2024-09-30 22:03:04.055898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:19.292 [2024-09-30 22:03:04.055903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:19.292 [2024-09-30 22:03:04.055907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:19.292 [2024-09-30 22:03:04.055913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:19.292 [2024-09-30 22:03:04.055918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:19.292 [2024-09-30 22:03:04.055923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.292 [2024-09-30 22:03:04.055928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:19.292 [2024-09-30 22:03:04.055933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:19.292 [2024-09-30 22:03:04.055938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.292 [2024-09-30 22:03:04.055943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:19.292 [2024-09-30 22:03:04.055948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:19.292 [2024-09-30 22:03:04.055953] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.292 [2024-09-30 22:03:04.055958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:19.293 [2024-09-30 22:03:04.055963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:19.293 [2024-09-30 22:03:04.055968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.293 [2024-09-30 22:03:04.055973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:19.293 [2024-09-30 22:03:04.055979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:19.293 [2024-09-30 22:03:04.055984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.293 [2024-09-30 22:03:04.055990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:19.293 [2024-09-30 22:03:04.055995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:19.293 [2024-09-30 22:03:04.056000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:19.293 [2024-09-30 22:03:04.056007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:19.293 [2024-09-30 22:03:04.056013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:19.293 [2024-09-30 22:03:04.056018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:19.293 [2024-09-30 22:03:04.056024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:19.293 [2024-09-30 22:03:04.056038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:19.293 [2024-09-30 22:03:04.056044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:19.293 [2024-09-30 22:03:04.056050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:19.293 [2024-09-30 22:03:04.056056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:19.293 [2024-09-30 22:03:04.056061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.293 [2024-09-30 22:03:04.056067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:19.293 [2024-09-30 22:03:04.056073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:19.293 [2024-09-30 22:03:04.056080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.293 [2024-09-30 22:03:04.056086] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:19.293 [2024-09-30 22:03:04.056093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:19.293 [2024-09-30 22:03:04.056101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:19.293 [2024-09-30 22:03:04.056107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:19.293 [2024-09-30 22:03:04.056114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:19.293 [2024-09-30 22:03:04.056120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:19.293 [2024-09-30 22:03:04.056126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:19.293 [2024-09-30 22:03:04.056132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:19.293 [2024-09-30 22:03:04.056138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:19.293 [2024-09-30 22:03:04.056144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:21:19.293 [2024-09-30 22:03:04.056151] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:19.293 [2024-09-30 22:03:04.056160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:19.293 [2024-09-30 22:03:04.056167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:19.293 [2024-09-30 22:03:04.056174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:19.293 [2024-09-30 22:03:04.056180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:19.293 [2024-09-30 22:03:04.056198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:19.293 [2024-09-30 22:03:04.056205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:19.293 [2024-09-30 22:03:04.056211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:19.293 [2024-09-30 22:03:04.056218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:19.293 [2024-09-30 22:03:04.056224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:19.293 [2024-09-30 22:03:04.056231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:19.293 [2024-09-30 22:03:04.056237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:19.293 [2024-09-30 22:03:04.056243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:19.293 [2024-09-30 22:03:04.056250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:19.293 [2024-09-30 22:03:04.056256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:19.293 [2024-09-30 22:03:04.056262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:19.293 [2024-09-30 22:03:04.056268] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:19.293 [2024-09-30 22:03:04.056275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:19.293 [2024-09-30 22:03:04.056282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:19.293 [2024-09-30 22:03:04.056290] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:19.293 [2024-09-30 22:03:04.056296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:19.293 [2024-09-30 22:03:04.056305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:19.293 [2024-09-30 22:03:04.056312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.056318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:19.293 [2024-09-30 22:03:04.056326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:21:19.293 [2024-09-30 22:03:04.056333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.073162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.073208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:19.293 [2024-09-30 22:03:04.073217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.792 ms 00:21:19.293 [2024-09-30 22:03:04.073223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.073291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.073303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:19.293 [2024-09-30 22:03:04.073309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:19.293 [2024-09-30 22:03:04.073317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.081990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.082032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:19.293 [2024-09-30 22:03:04.082043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.629 ms 00:21:19.293 [2024-09-30 22:03:04.082052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.082087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.082097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:19.293 [2024-09-30 22:03:04.082112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:19.293 [2024-09-30 22:03:04.082125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.082503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.082531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:19.293 [2024-09-30 22:03:04.082542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:21:19.293 [2024-09-30 22:03:04.082552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.082698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.082717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:19.293 [2024-09-30 22:03:04.082728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:19.293 [2024-09-30 22:03:04.082743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.087761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 
22:03:04.087804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:19.293 [2024-09-30 22:03:04.087820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.992 ms 00:21:19.293 [2024-09-30 22:03:04.087829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.090034] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:19.293 [2024-09-30 22:03:04.090065] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:19.293 [2024-09-30 22:03:04.090078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.090084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:19.293 [2024-09-30 22:03:04.090090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.139 ms 00:21:19.293 [2024-09-30 22:03:04.090100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.293 [2024-09-30 22:03:04.101510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.293 [2024-09-30 22:03:04.101547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:19.293 [2024-09-30 22:03:04.101557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.108 ms 00:21:19.293 [2024-09-30 22:03:04.101563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.553 [2024-09-30 22:03:04.103106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.553 [2024-09-30 22:03:04.103136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:19.553 [2024-09-30 22:03:04.103143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.524 ms 00:21:19.553 [2024-09-30 22:03:04.103149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.553 [2024-09-30 22:03:04.104442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.553 [2024-09-30 22:03:04.104470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:19.553 [2024-09-30 22:03:04.104476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:21:19.553 [2024-09-30 22:03:04.104481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.553 [2024-09-30 22:03:04.104717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.553 [2024-09-30 22:03:04.104731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:19.553 [2024-09-30 22:03:04.104738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:21:19.553 [2024-09-30 22:03:04.104743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.553 [2024-09-30 22:03:04.119384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.553 [2024-09-30 22:03:04.119423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:19.553 [2024-09-30 22:03:04.119432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.626 ms 00:21:19.553 [2024-09-30 22:03:04.119439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.553 [2024-09-30 22:03:04.125259] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:19.553 [2024-09-30 22:03:04.127223] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.553 [2024-09-30 22:03:04.127251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:19.553 [2024-09-30 22:03:04.127259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.754 ms 00:21:19.553 [2024-09-30 22:03:04.127270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.554 [2024-09-30 22:03:04.127312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.554 [2024-09-30 22:03:04.127321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:19.554 [2024-09-30 22:03:04.127329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:19.554 [2024-09-30 22:03:04.127336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.554 [2024-09-30 22:03:04.128613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.554 [2024-09-30 22:03:04.128642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:19.554 [2024-09-30 22:03:04.128649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:21:19.554 [2024-09-30 22:03:04.128657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.554 [2024-09-30 22:03:04.128675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.554 [2024-09-30 22:03:04.128681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:19.554 [2024-09-30 22:03:04.128687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:19.554 [2024-09-30 22:03:04.128693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.554 [2024-09-30 22:03:04.128719] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:19.554 [2024-09-30 22:03:04.128726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.554 [2024-09-30 22:03:04.128732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:19.554 [2024-09-30 22:03:04.128738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:19.554 [2024-09-30 22:03:04.128743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.554 [2024-09-30 22:03:04.131846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.554 [2024-09-30 22:03:04.131881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:19.554 [2024-09-30 22:03:04.131892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.089 ms 00:21:19.554 [2024-09-30 22:03:04.131899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.554 [2024-09-30 22:03:04.131952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:19.554 [2024-09-30 22:03:04.131963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:19.554 [2024-09-30 22:03:04.131970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:19.554 [2024-09-30 22:03:04.131977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:19.554 [2024-09-30 22:03:04.132722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 86.782 ms, result 0 00:21:41.373  Copying: 1472/1048576 [kB] (1472 kBps) Copying: 12/1024 [MB] (10 MBps) Copying: 66/1024 [MB] (54 MBps) Copying: 122/1024 
[MB] (56 MBps) Copying: 178/1024 [MB] (56 MBps) Copying: 233/1024 [MB] (54 MBps) Copying: 286/1024 [MB] (53 MBps) Copying: 339/1024 [MB] (52 MBps) Copying: 393/1024 [MB] (54 MBps) Copying: 451/1024 [MB] (57 MBps) Copying: 509/1024 [MB] (58 MBps) Copying: 567/1024 [MB] (57 MBps) Copying: 620/1024 [MB] (53 MBps) Copying: 671/1024 [MB] (51 MBps) Copying: 726/1024 [MB] (54 MBps) Copying: 780/1024 [MB] (54 MBps) Copying: 834/1024 [MB] (54 MBps) Copying: 888/1024 [MB] (53 MBps) Copying: 943/1024 [MB] (55 MBps) Copying: 997/1024 [MB] (53 MBps) Copying: 1024/1024 [MB] (average 49 MBps)[2024-09-30 22:03:25.764386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.373 [2024-09-30 22:03:25.764449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.373 [2024-09-30 22:03:25.764463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:41.373 [2024-09-30 22:03:25.764471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.373 [2024-09-30 22:03:25.764492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.373 [2024-09-30 22:03:25.764944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.373 [2024-09-30 22:03:25.764977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.373 [2024-09-30 22:03:25.764986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:21:41.373 [2024-09-30 22:03:25.764998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.373 [2024-09-30 22:03:25.765224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.373 [2024-09-30 22:03:25.765236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.373 [2024-09-30 22:03:25.765245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:21:41.373 [2024-09-30 22:03:25.765254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.373 [2024-09-30 22:03:25.775368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.373 [2024-09-30 22:03:25.775405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:41.373 [2024-09-30 22:03:25.775416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.099 ms 00:21:41.373 [2024-09-30 22:03:25.775430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.373 [2024-09-30 22:03:25.781597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.373 [2024-09-30 22:03:25.781630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:41.373 [2024-09-30 22:03:25.781641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:21:41.373 [2024-09-30 22:03:25.781650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.373 [2024-09-30 22:03:25.783115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.373 [2024-09-30 22:03:25.783149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.373 [2024-09-30 22:03:25.783158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:21:41.373 [2024-09-30 22:03:25.783165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.373 [2024-09-30 22:03:25.786969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.373 [2024-09-30 
22:03:25.787006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.373 [2024-09-30 22:03:25.787022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:21:41.373 [2024-09-30 22:03:25.787029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.373 [2024-09-30 22:03:25.788368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-09-30 22:03:25.788403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:41.374 [2024-09-30 22:03:25.788412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.318 ms 00:21:41.374 [2024-09-30 22:03:25.788420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-09-30 22:03:25.790164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-09-30 22:03:25.790206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:41.374 [2024-09-30 22:03:25.790216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:21:41.374 [2024-09-30 22:03:25.790224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-09-30 22:03:25.791322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-09-30 22:03:25.791355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:41.374 [2024-09-30 22:03:25.791364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.070 ms 00:21:41.374 [2024-09-30 22:03:25.791370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-09-30 22:03:25.792477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-09-30 22:03:25.792514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.374 [2024-09-30 22:03:25.792531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:21:41.374 [2024-09-30 22:03:25.792539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-09-30 22:03:25.793554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.374 [2024-09-30 22:03:25.793584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:41.374 [2024-09-30 22:03:25.793593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.963 ms 00:21:41.374 [2024-09-30 22:03:25.793600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.374 [2024-09-30 22:03:25.793626] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:41.374 [2024-09-30 22:03:25.793640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:41.374 [2024-09-30 22:03:25.793650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:41.374 [2024-09-30 22:03:25.793658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 
state: free 00:21:41.374 [2024-09-30 22:03:25.793688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 
261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.793999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:41.374 [2024-09-30 22:03:25.794173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794235] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:41.375 [2024-09-30 22:03:25.794390] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:41.375 [2024-09-30 22:03:25.794405] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a2be95a-d5b1-4b79-bff0-a31df33a7add 00:21:41.375 [2024-09-30 22:03:25.794415] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:41.375 [2024-09-30 22:03:25.794422] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 142528 00:21:41.375 [2024-09-30 22:03:25.794429] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 140544 00:21:41.375 [2024-09-30 22:03:25.794441] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0141 00:21:41.375 
[2024-09-30 22:03:25.794448] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:41.375 [2024-09-30 22:03:25.794456] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:41.375 [2024-09-30 22:03:25.794463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:41.375 [2024-09-30 22:03:25.794470] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:41.375 [2024-09-30 22:03:25.794476] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:41.375 [2024-09-30 22:03:25.794483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.375 [2024-09-30 22:03:25.794490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:41.375 [2024-09-30 22:03:25.794498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:21:41.375 [2024-09-30 22:03:25.794504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.795967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.375 [2024-09-30 22:03:25.795992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:41.375 [2024-09-30 22:03:25.796009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.448 ms 00:21:41.375 [2024-09-30 22:03:25.796017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.796099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.375 [2024-09-30 22:03:25.796112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:41.375 [2024-09-30 22:03:25.796124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:41.375 [2024-09-30 22:03:25.796134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.800686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.800719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:41.375 [2024-09-30 22:03:25.800728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.800736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.800786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.800794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.375 [2024-09-30 22:03:25.800802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.800811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.800850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.800864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.375 [2024-09-30 22:03:25.800872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.800879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.800898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.800906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.375 [2024-09-30 22:03:25.800913] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.800920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.809894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.809932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.375 [2024-09-30 22:03:25.809941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.809949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.819998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.820041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.375 [2024-09-30 22:03:25.820050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.820063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.820105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.820114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:41.375 [2024-09-30 22:03:25.820122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.820129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.820161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.820169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:41.375 [2024-09-30 22:03:25.820177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.820197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.820260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.820269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:41.375 [2024-09-30 22:03:25.820277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.820285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.820315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.820333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:41.375 [2024-09-30 22:03:25.820341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.820348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.820383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.820391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:41.375 [2024-09-30 22:03:25.820399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.820406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.820442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:41.375 [2024-09-30 22:03:25.820450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:41.375 
[2024-09-30 22:03:25.820458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:41.375 [2024-09-30 22:03:25.820465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.375 [2024-09-30 22:03:25.820572] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.162 ms, result 0 00:21:41.375 00:21:41.375 00:21:41.375 22:03:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:43.918 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:43.918 22:03:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:43.918 [2024-09-30 22:03:28.216686] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:21:43.918 [2024-09-30 22:03:28.216801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89399 ] 00:21:43.918 [2024-09-30 22:03:28.344628] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:43.918 [2024-09-30 22:03:28.362315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.918 [2024-09-30 22:03:28.396118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.918 [2024-09-30 22:03:28.483646] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.918 [2024-09-30 22:03:28.483709] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:43.918 [2024-09-30 22:03:28.636897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.636942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:43.918 [2024-09-30 22:03:28.636954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:43.918 [2024-09-30 22:03:28.636962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.637003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.637013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.918 [2024-09-30 22:03:28.637024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:43.918 [2024-09-30 22:03:28.637032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.637050] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:43.918 [2024-09-30 22:03:28.637289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:43.918 [2024-09-30 22:03:28.637303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.637310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.918 [2024-09-30 22:03:28.637319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:21:43.918 [2024-09-30 
22:03:28.637328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.638714] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:43.918 [2024-09-30 22:03:28.640973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.641090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:43.918 [2024-09-30 22:03:28.641157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.261 ms 00:21:43.918 [2024-09-30 22:03:28.641180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.641304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.641335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:43.918 [2024-09-30 22:03:28.641355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:43.918 [2024-09-30 22:03:28.641406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.646332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.646438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.918 [2024-09-30 22:03:28.646493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.847 ms 00:21:43.918 [2024-09-30 22:03:28.646515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.646628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.646652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.918 [2024-09-30 22:03:28.646671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:43.918 [2024-09-30 22:03:28.646715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.646804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.646831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:43.918 [2024-09-30 22:03:28.646882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:43.918 [2024-09-30 22:03:28.646932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.646969] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:43.918 [2024-09-30 22:03:28.648371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.648462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.918 [2024-09-30 22:03:28.648511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.408 ms 00:21:43.918 [2024-09-30 22:03:28.648539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.648582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.918 [2024-09-30 22:03:28.648635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:43.918 [2024-09-30 22:03:28.648658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:43.918 [2024-09-30 22:03:28.648679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.918 [2024-09-30 22:03:28.648734] 
ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:43.918 [2024-09-30 22:03:28.648766] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:43.918 [2024-09-30 22:03:28.648858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:43.918 [2024-09-30 22:03:28.648894] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:43.918 [2024-09-30 22:03:28.649060] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:43.918 [2024-09-30 22:03:28.649127] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:43.918 [2024-09-30 22:03:28.649315] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:43.918 [2024-09-30 22:03:28.649337] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649346] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649357] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:43.919 [2024-09-30 22:03:28.649364] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:43.919 [2024-09-30 22:03:28.649372] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:43.919 [2024-09-30 22:03:28.649378] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:43.919 [2024-09-30 22:03:28.649386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.919 [2024-09-30 22:03:28.649394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:43.919 [2024-09-30 22:03:28.649402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:21:43.919 [2024-09-30 22:03:28.649414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.919 [2024-09-30 22:03:28.649507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.919 [2024-09-30 22:03:28.649515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:43.919 [2024-09-30 22:03:28.649522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:43.919 [2024-09-30 22:03:28.649533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.919 [2024-09-30 22:03:28.649628] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:43.919 [2024-09-30 22:03:28.649638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:43.919 [2024-09-30 22:03:28.649646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:43.919 [2024-09-30 22:03:28.649667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
band_md 00:21:43.919 [2024-09-30 22:03:28.649693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.919 [2024-09-30 22:03:28.649710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:43.919 [2024-09-30 22:03:28.649717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:43.919 [2024-09-30 22:03:28.649723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:43.919 [2024-09-30 22:03:28.649730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:43.919 [2024-09-30 22:03:28.649737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:43.919 [2024-09-30 22:03:28.649743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:43.919 [2024-09-30 22:03:28.649756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:43.919 [2024-09-30 22:03:28.649776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:43.919 [2024-09-30 22:03:28.649797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:43.919 [2024-09-30 22:03:28.649823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:43.919 [2024-09-30 22:03:28.649846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:43.919 [2024-09-30 22:03:28.649868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.919 [2024-09-30 22:03:28.649883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:43.919 [2024-09-30 22:03:28.649890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:43.919 [2024-09-30 22:03:28.649897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:43.919 [2024-09-30 22:03:28.649905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:43.919 [2024-09-30 22:03:28.649912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:43.919 [2024-09-30 22:03:28.649920] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:43.919 [2024-09-30 22:03:28.649934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:43.919 [2024-09-30 22:03:28.649943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649951] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:43.919 [2024-09-30 22:03:28.649960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:43.919 [2024-09-30 22:03:28.649968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:43.919 [2024-09-30 22:03:28.649976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:43.919 [2024-09-30 22:03:28.649984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:43.919 [2024-09-30 22:03:28.649993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:43.919 [2024-09-30 22:03:28.650000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:43.919 [2024-09-30 22:03:28.650007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:43.919 [2024-09-30 22:03:28.650015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:43.919 [2024-09-30 22:03:28.650022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:43.919 [2024-09-30 22:03:28.650031] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:43.919 [2024-09-30 22:03:28.650042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.919 [2024-09-30 22:03:28.650055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:43.919 [2024-09-30 22:03:28.650064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:43.919 [2024-09-30 22:03:28.650072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:43.919 [2024-09-30 22:03:28.650082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:43.919 [2024-09-30 22:03:28.650090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:43.919 [2024-09-30 22:03:28.650098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:43.919 [2024-09-30 22:03:28.650106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:43.919 [2024-09-30 22:03:28.650114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:43.919 [2024-09-30 22:03:28.650122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:43.919 [2024-09-30 22:03:28.650130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 
ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:43.919 [2024-09-30 22:03:28.650138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:43.919 [2024-09-30 22:03:28.650146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:43.919 [2024-09-30 22:03:28.650154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:43.919 [2024-09-30 22:03:28.650162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:43.919 [2024-09-30 22:03:28.650170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:43.919 [2024-09-30 22:03:28.650179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:43.919 [2024-09-30 22:03:28.650561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:43.919 [2024-09-30 22:03:28.650623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:43.919 [2024-09-30 22:03:28.650685] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:43.919 [2024-09-30 22:03:28.650766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:43.919 [2024-09-30 22:03:28.650798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.919 [2024-09-30 22:03:28.650887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:43.919 [2024-09-30 22:03:28.650911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.237 ms 00:21:43.919 [2024-09-30 22:03:28.650933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.919 [2024-09-30 22:03:28.675065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.919 [2024-09-30 22:03:28.675413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.919 [2024-09-30 22:03:28.675478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.038 ms 00:21:43.919 [2024-09-30 22:03:28.675501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.919 [2024-09-30 22:03:28.675748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.919 [2024-09-30 22:03:28.675784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:43.919 [2024-09-30 22:03:28.675808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:21:43.919 [2024-09-30 22:03:28.675832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.919 [2024-09-30 22:03:28.685543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.685650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.920 [2024-09-30 22:03:28.685664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.574 ms 00:21:43.920 [2024-09-30 22:03:28.685671] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.685697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.685705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.920 [2024-09-30 22:03:28.685714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:43.920 [2024-09-30 22:03:28.685725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.686061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.686074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.920 [2024-09-30 22:03:28.686082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:43.920 [2024-09-30 22:03:28.686089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.686224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.686234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.920 [2024-09-30 22:03:28.686243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:43.920 [2024-09-30 22:03:28.686251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.690701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.690731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.920 [2024-09-30 22:03:28.690740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.430 ms 00:21:43.920 [2024-09-30 22:03:28.690747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.692953] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:43.920 [2024-09-30 22:03:28.693067] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:43.920 [2024-09-30 22:03:28.693084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.693092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:43.920 [2024-09-30 22:03:28.693105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.268 ms 00:21:43.920 [2024-09-30 22:03:28.693112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.707365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.707481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:43.920 [2024-09-30 22:03:28.707496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.223 ms 00:21:43.920 [2024-09-30 22:03:28.707504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.709119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.709149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:43.920 [2024-09-30 22:03:28.709158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:21:43.920 [2024-09-30 22:03:28.709165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.710560] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.710663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:43.920 [2024-09-30 22:03:28.710675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:21:43.920 [2024-09-30 22:03:28.710682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.710983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.711000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:43.920 [2024-09-30 22:03:28.711009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:21:43.920 [2024-09-30 22:03:28.711018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.920 [2024-09-30 22:03:28.726167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.920 [2024-09-30 22:03:28.726306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:43.920 [2024-09-30 22:03:28.726327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.131 ms 00:21:43.920 [2024-09-30 22:03:28.726335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.178 [2024-09-30 22:03:28.733588] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:44.178 [2024-09-30 22:03:28.735934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.178 [2024-09-30 22:03:28.735965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:44.178 [2024-09-30 22:03:28.735976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.556 ms 00:21:44.178 [2024-09-30 22:03:28.735984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.178 [2024-09-30 22:03:28.736041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.178 [2024-09-30 22:03:28.736056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:44.178 [2024-09-30 22:03:28.736065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:44.178 [2024-09-30 22:03:28.736072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.178 [2024-09-30 22:03:28.736661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.178 [2024-09-30 22:03:28.736693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:44.178 [2024-09-30 22:03:28.736702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:21:44.178 [2024-09-30 22:03:28.736709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.178 [2024-09-30 22:03:28.736734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.178 [2024-09-30 22:03:28.736743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:44.178 [2024-09-30 22:03:28.736751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:44.178 [2024-09-30 22:03:28.736758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.178 [2024-09-30 22:03:28.736787] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:44.178 [2024-09-30 22:03:28.736796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.178 [2024-09-30 22:03:28.736803] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:44.178 [2024-09-30 22:03:28.736816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:44.178 [2024-09-30 22:03:28.736824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.178 [2024-09-30 22:03:28.739970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.178 [2024-09-30 22:03:28.740092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:44.178 [2024-09-30 22:03:28.740148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:21:44.178 [2024-09-30 22:03:28.740170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.178 [2024-09-30 22:03:28.740295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:44.179 [2024-09-30 22:03:28.740352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:44.179 [2024-09-30 22:03:28.740441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:44.179 [2024-09-30 22:03:28.740474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:44.179 [2024-09-30 22:03:28.741637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 104.355 ms, result 0 00:22:05.163  Copying: 47/1024 [MB] (47 MBps) Copying: 96/1024 [MB] (49 MBps) Copying: 144/1024 [MB] (48 MBps) Copying: 192/1024 [MB] (48 MBps) Copying: 243/1024 [MB] (50 MBps) Copying: 293/1024 [MB] (50 MBps) Copying: 341/1024 [MB] (47 MBps) Copying: 392/1024 [MB] (50 MBps) Copying: 439/1024 [MB] (47 MBps) Copying: 486/1024 [MB] (46 MBps) Copying: 536/1024 [MB] (49 MBps) Copying: 585/1024 [MB] (49 MBps) Copying: 635/1024 [MB] (50 MBps) Copying: 684/1024 [MB] (49 MBps) Copying: 734/1024 [MB] (49 MBps) Copying: 783/1024 [MB] (49 MBps) Copying: 833/1024 [MB] (49 MBps) Copying: 881/1024 [MB] (48 MBps) Copying: 930/1024 [MB] (48 MBps) Copying: 977/1024 [MB] (47 MBps) Copying: 1024/1024 [MB] (average 48 MBps)[2024-09-30 22:03:49.962138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.163 [2024-09-30 22:03:49.962213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.164 [2024-09-30 22:03:49.962228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:05.164 [2024-09-30 22:03:49.962240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.164 [2024-09-30 22:03:49.962261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:05.164 [2024-09-30 22:03:49.962717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.164 [2024-09-30 22:03:49.962746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.164 [2024-09-30 22:03:49.962755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:22:05.164 [2024-09-30 22:03:49.962762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.164 [2024-09-30 22:03:49.962978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.164 [2024-09-30 22:03:49.962988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.164 [2024-09-30 22:03:49.962996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:22:05.164 [2024-09-30 22:03:49.963007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:05.164 [2024-09-30 22:03:49.967033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.164 [2024-09-30 22:03:49.967062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.164 [2024-09-30 22:03:49.967072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.012 ms 00:22:05.164 [2024-09-30 22:03:49.967080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.164 [2024-09-30 22:03:49.973514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.164 [2024-09-30 22:03:49.973544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:05.164 [2024-09-30 22:03:49.973554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.417 ms 00:22:05.164 [2024-09-30 22:03:49.973561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.164 [2024-09-30 22:03:49.975008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.164 [2024-09-30 22:03:49.975042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.164 [2024-09-30 22:03:49.975050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:22:05.164 [2024-09-30 22:03:49.975057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.423 [2024-09-30 22:03:49.978107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.423 [2024-09-30 22:03:49.978152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.423 [2024-09-30 22:03:49.978161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:22:05.423 [2024-09-30 22:03:49.978168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.423 [2024-09-30 22:03:49.979676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.423 [2024-09-30 22:03:49.979704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.423 [2024-09-30 22:03:49.979713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.475 ms 00:22:05.423 [2024-09-30 22:03:49.979726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.423 [2024-09-30 22:03:49.981539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.423 [2024-09-30 22:03:49.981569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:05.423 [2024-09-30 22:03:49.981578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.799 ms 00:22:05.423 [2024-09-30 22:03:49.981585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.423 [2024-09-30 22:03:49.982757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.423 [2024-09-30 22:03:49.982785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:05.423 [2024-09-30 22:03:49.982793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.155 ms 00:22:05.423 [2024-09-30 22:03:49.982799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.423 [2024-09-30 22:03:49.983734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.423 [2024-09-30 22:03:49.983773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:05.423 [2024-09-30 22:03:49.983781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:22:05.423 [2024-09-30 
22:03:49.983788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.423 [2024-09-30 22:03:49.984626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.423 [2024-09-30 22:03:49.984760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.423 [2024-09-30 22:03:49.984776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.786 ms 00:22:05.423 [2024-09-30 22:03:49.984784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.423 [2024-09-30 22:03:49.984802] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.423 [2024-09-30 22:03:49.984816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:05.423 [2024-09-30 22:03:49.984827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:22:05.423 [2024-09-30 22:03:49.984837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984982] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:05.423 [2024-09-30 22:03:49.984997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 
[2024-09-30 22:03:49.985168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:22:05.424 [2024-09-30 22:03:49.985373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:05.424 [2024-09-30 22:03:49.985600] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:05.424 [2024-09-30 22:03:49.985608] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5a2be95a-d5b1-4b79-bff0-a31df33a7add 00:22:05.424 [2024-09-30 22:03:49.985616] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:22:05.424 [2024-09-30 22:03:49.985623] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:05.424 [2024-09-30 22:03:49.985630] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:05.424 [2024-09-30 22:03:49.985638] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:05.425 [2024-09-30 22:03:49.985645] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:05.425 [2024-09-30 22:03:49.985657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:05.425 [2024-09-30 22:03:49.985664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.425 [2024-09-30 22:03:49.985671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.425 [2024-09-30 22:03:49.985677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.425 [2024-09-30 22:03:49.985684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.425 [2024-09-30 22:03:49.985695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.425 [2024-09-30 22:03:49.985703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:22:05.425 [2024-09-30 22:03:49.985710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:49.987130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.425 [2024-09-30 22:03:49.987150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.425 [2024-09-30 22:03:49.987159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:22:05.425 [2024-09-30 22:03:49.987172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:49.987265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.425 [2024-09-30 22:03:49.987278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.425 [2024-09-30 22:03:49.987286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:05.425 [2024-09-30 22:03:49.987293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:49.991844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:49.991948] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.425 [2024-09-30 22:03:49.992075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:49.992131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:49.992209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:49.992267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:05.425 [2024-09-30 22:03:49.992313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:49.992334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:49.992389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:49.992451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.425 [2024-09-30 22:03:49.992474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:49.992493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:49.992529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:49.992655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.425 [2024-09-30 22:03:49.992690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:49.992741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:50.001551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:50.001681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.425 [2024-09-30 22:03:50.001748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:50.001785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:50.009408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:50.009532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.425 [2024-09-30 22:03:50.009582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:50.009604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:50.009656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:50.009798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.425 [2024-09-30 22:03:50.009814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:50.009822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:50.009849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.425 [2024-09-30 22:03:50.009867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.425 [2024-09-30 22:03:50.009875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.425 [2024-09-30 22:03:50.009883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.425 [2024-09-30 22:03:50.009948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:05.425 [2024-09-30 22:03:50.009958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:05.425 [2024-09-30 22:03:50.009965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:05.425 [2024-09-30 22:03:50.009976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:05.425 [2024-09-30 22:03:50.010007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:05.425 [2024-09-30 22:03:50.010019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:05.425 [2024-09-30 22:03:50.010027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:05.425 [2024-09-30 22:03:50.010034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:05.425 [2024-09-30 22:03:50.010064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:05.425 [2024-09-30 22:03:50.010072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:05.425 [2024-09-30 22:03:50.010080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:05.425 [2024-09-30 22:03:50.010087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:05.425 [2024-09-30 22:03:50.010126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:05.425 [2024-09-30 22:03:50.010137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:05.425 [2024-09-30 22:03:50.010148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:05.425 [2024-09-30 22:03:50.010155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:05.425 [2024-09-30 22:03:50.010284] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.123 ms, result 0
00:22:05.425
00:22:05.425
00:22:05.425 22:03:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:22:07.956 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:22:07.956 Process with pid 88254 is not found
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 88254
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 88254 ']'
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 88254
00:22:07.956 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (88254) - No such process
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 88254 is not found'
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:22:07.956 Remove shared memory files
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:22:07.956 22:03:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:22:08.216 22:03:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:22:08.216 22:03:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:22:08.216 22:03:52 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:22:08.216 ************************************
00:22:08.216 END TEST ftl_dirty_shutdown
00:22:08.216 ************************************
00:22:08.216
00:22:08.216 real 2m7.806s
00:22:08.216 user 2m22.833s
00:22:08.216 sys 0m21.881s
00:22:08.216 22:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:08.216 22:03:52 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:22:08.216 22:03:52 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:22:08.216 22:03:52 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:22:08.216 22:03:52 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:22:08.216 22:03:52 ftl -- common/autotest_common.sh@10 -- # set +x
00:22:08.216 ************************************
00:22:08.216 START TEST ftl_upgrade_shutdown
00:22:08.216 ************************************
00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:22:08.216 * Looking for test storage...
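Each FTL management step in the trace above is logged as an Action/name/duration/status quadruple from mngt/ftl_mngt.c, and finish_msg then closes the whole process with a total ('FTL startup', duration = 104.355 ms earlier; 'FTL shutdown', duration = 48.123 ms just above). Since the steps run back to back, the per-step durations should roughly sum to each reported total. A throwaway Python sketch of that cross-check; the helper name is this sketch's own, and the regexes mirror this log's *NOTICE* formatting only, not any stable SPDK interface:

    import re

    # One quadruple per step: Action / name / duration / status. Only the step
    # durations and the finish_msg totals are needed for the comparison.
    STEP = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")
    DONE = re.compile(r"finish_msg: \*NOTICE\*: \[FTL\]\[\w+\] Management process finished, "
                      r"name '([^']+)', duration = ([0-9.]+) ms")

    def check_totals(log_text: str) -> None:
        start = 0
        for done in DONE.finditer(log_text):
            # Sum the step durations logged since the previous finish_msg.
            step_sum = sum(float(d) for d in STEP.findall(log_text, start, done.start()))
            name, total = done.group(1), float(done.group(2))
            print(f"{name}: reported {total:.3f} ms, steps sum to {step_sum:.3f} ms")
            start = done.end()

Time spent between steps (poller scheduling, I/O completion waits) is not attributed to any step, so the summed figure is a lower bound on the reported total rather than an exact match.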
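The copy phase of the dirty-shutdown test above streams 1024 MB through the FTL bdev while logging a running 'Copying: N/1024 [MB] (R MBps)' gauge, closing with 'average 48 MBps'; that is consistent with the roughly 21 s elapsed between the 00:21:44 and 00:22:05 timestamps. A small companion sketch for pulling the per-window samples out of a saved log, with the pattern keyed to this output format and the function name ours:

    import re

    def copy_rates(log_text: str) -> list[int]:
        """Collect per-window MBps samples; the closing 'average 48 MBps'
        record deliberately does not match because of its 'average ' prefix."""
        return [int(r) for r in re.findall(r"\[MB\] \((\d+) MBps\)", log_text)]

On the progress line above, the 20 samples average out to roughly 48 MBps, matching the reported figure.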
00:22:08.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.216 --rc genhtml_branch_coverage=1 00:22:08.216 --rc genhtml_function_coverage=1 00:22:08.216 --rc genhtml_legend=1 00:22:08.216 --rc geninfo_all_blocks=1 00:22:08.216 --rc geninfo_unexecuted_blocks=1 00:22:08.216 00:22:08.216 ' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.216 --rc genhtml_branch_coverage=1 00:22:08.216 --rc genhtml_function_coverage=1 00:22:08.216 --rc genhtml_legend=1 00:22:08.216 --rc geninfo_all_blocks=1 00:22:08.216 --rc geninfo_unexecuted_blocks=1 00:22:08.216 00:22:08.216 ' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.216 --rc genhtml_branch_coverage=1 00:22:08.216 --rc genhtml_function_coverage=1 00:22:08.216 --rc genhtml_legend=1 00:22:08.216 --rc geninfo_all_blocks=1 00:22:08.216 --rc geninfo_unexecuted_blocks=1 00:22:08.216 00:22:08.216 ' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:08.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.216 --rc genhtml_branch_coverage=1 00:22:08.216 --rc genhtml_function_coverage=1 00:22:08.216 --rc genhtml_legend=1 00:22:08.216 --rc geninfo_all_blocks=1 00:22:08.216 --rc geninfo_unexecuted_blocks=1 00:22:08.216 00:22:08.216 ' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:22:08.216 22:03:52 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:22:08.216 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=89727 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 89727 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89727 ']' 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.217 22:03:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:08.475 [2024-09-30 22:03:53.038063] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:08.475 [2024-09-30 22:03:53.038349] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89727 ] 00:22:08.475 [2024-09-30 22:03:53.176597] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
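Earlier in this test's setup, the xtrace walks cmp_versions from scripts/common.sh as it decides that lcov 1.15 < 2: both version strings are split on '.', '-' and ':' (IFS=.-:), the component counts are taken (ver1_l=2, ver2_l=1), and components are compared numerically until one side wins (here 1 < 2, so return 0). A Python rendering of that comparison, reconstructed from the trace rather than taken from the script itself; treating non-numeric components as 0 is this sketch's assumption:

    import re

    def version_lt(a: str, b: str) -> bool:
        """True when version a sorts before b, e.g. version_lt('1.15', '2')."""
        def parts(v: str) -> list[int]:
            # Same separators as IFS=.-: in the trace; non-numeric parts -> 0.
            return [int(p) if p.isdigit() else 0 for p in re.split(r"[.:-]", v)]
        pa, pb = parts(a), parts(b)
        width = max(len(pa), len(pb))   # the bash loop runs to max(ver1_l, ver2_l)
        pa += [0] * (width - len(pa))
        pb += [0] * (width - len(pb))
        return pa < pb                  # lists compare element-wise, like the loop

    assert version_lt("1.15", "2")      # matches 'lt 1.15 2' succeeding above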
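Once the target is up, the records below show the harness attaching the base NVMe device, fetching its JSON description with rpc.py bdev_get_bdevs -b basen1, and deriving the device size in get_bdev_size by combining the block_size and num_blocks fields via jq (4096 B x 1310720 blocks gives the base_size=5120 MiB seen in the trace). The same arithmetic in Python, using a pared-down stand-in for the JSON dumped below:

    import json

    def bdev_size_mib(bdev_info_json: str) -> int:
        info = json.loads(bdev_info_json)[0]   # bdev_get_bdevs returns a list
        return info["block_size"] * info["num_blocks"] // (1024 * 1024)

    # Values taken from the basen1 dump below: 4096 B blocks, 1310720 blocks.
    assert bdev_size_mib('[{"block_size": 4096, "num_blocks": 1310720}]') == 5120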
00:22:08.475 [2024-09-30 22:03:53.197281] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.475 [2024-09-30 22:03:53.231589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:09.409 22:03:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:09.409 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:22:09.667 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:09.667 { 00:22:09.667 "name": 
"basen1", 00:22:09.667 "aliases": [ 00:22:09.667 "8114aafd-6371-4bc9-b47d-1231103cc94c" 00:22:09.667 ], 00:22:09.667 "product_name": "NVMe disk", 00:22:09.667 "block_size": 4096, 00:22:09.667 "num_blocks": 1310720, 00:22:09.667 "uuid": "8114aafd-6371-4bc9-b47d-1231103cc94c", 00:22:09.667 "numa_id": -1, 00:22:09.667 "assigned_rate_limits": { 00:22:09.667 "rw_ios_per_sec": 0, 00:22:09.667 "rw_mbytes_per_sec": 0, 00:22:09.667 "r_mbytes_per_sec": 0, 00:22:09.667 "w_mbytes_per_sec": 0 00:22:09.667 }, 00:22:09.667 "claimed": true, 00:22:09.667 "claim_type": "read_many_write_one", 00:22:09.667 "zoned": false, 00:22:09.667 "supported_io_types": { 00:22:09.667 "read": true, 00:22:09.667 "write": true, 00:22:09.667 "unmap": true, 00:22:09.667 "flush": true, 00:22:09.667 "reset": true, 00:22:09.667 "nvme_admin": true, 00:22:09.667 "nvme_io": true, 00:22:09.667 "nvme_io_md": false, 00:22:09.667 "write_zeroes": true, 00:22:09.667 "zcopy": false, 00:22:09.667 "get_zone_info": false, 00:22:09.667 "zone_management": false, 00:22:09.667 "zone_append": false, 00:22:09.667 "compare": true, 00:22:09.667 "compare_and_write": false, 00:22:09.668 "abort": true, 00:22:09.668 "seek_hole": false, 00:22:09.668 "seek_data": false, 00:22:09.668 "copy": true, 00:22:09.668 "nvme_iov_md": false 00:22:09.668 }, 00:22:09.668 "driver_specific": { 00:22:09.668 "nvme": [ 00:22:09.668 { 00:22:09.668 "pci_address": "0000:00:11.0", 00:22:09.668 "trid": { 00:22:09.668 "trtype": "PCIe", 00:22:09.668 "traddr": "0000:00:11.0" 00:22:09.668 }, 00:22:09.668 "ctrlr_data": { 00:22:09.668 "cntlid": 0, 00:22:09.668 "vendor_id": "0x1b36", 00:22:09.668 "model_number": "QEMU NVMe Ctrl", 00:22:09.668 "serial_number": "12341", 00:22:09.668 "firmware_revision": "8.0.0", 00:22:09.668 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:09.668 "oacs": { 00:22:09.668 "security": 0, 00:22:09.668 "format": 1, 00:22:09.668 "firmware": 0, 00:22:09.668 "ns_manage": 1 00:22:09.668 }, 00:22:09.668 "multi_ctrlr": false, 00:22:09.668 "ana_reporting": false 00:22:09.668 }, 00:22:09.668 "vs": { 00:22:09.668 "nvme_version": "1.4" 00:22:09.668 }, 00:22:09.668 "ns_data": { 00:22:09.668 "id": 1, 00:22:09.668 "can_share": false 00:22:09.668 } 00:22:09.668 } 00:22:09.668 ], 00:22:09.668 "mp_policy": "active_passive" 00:22:09.668 } 00:22:09.668 } 00:22:09.668 ]' 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:09.668 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:09.926 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=9bc1ca00-9ed4-4686-ae84-98a7e6d013d5 00:22:09.926 22:03:54 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:09.926 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9bc1ca00-9ed4-4686-ae84-98a7e6d013d5 00:22:10.183 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:22:10.183 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=16d9a5d5-2524-4766-bb35-b558f0c45b3e 00:22:10.183 22:03:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 16d9a5d5-2524-4766-bb35-b558f0c45b3e 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=b1a46467-28e5-4697-81c3-42624f58493e 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z b1a46467-28e5-4697-81c3-42624f58493e ]] 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 b1a46467-28e5-4697-81c3-42624f58493e 5120 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=b1a46467-28e5-4697-81c3-42624f58493e 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size b1a46467-28e5-4697-81c3-42624f58493e 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=b1a46467-28e5-4697-81c3-42624f58493e 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:10.441 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1a46467-28e5-4697-81c3-42624f58493e 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:10.700 { 00:22:10.700 "name": "b1a46467-28e5-4697-81c3-42624f58493e", 00:22:10.700 "aliases": [ 00:22:10.700 "lvs/basen1p0" 00:22:10.700 ], 00:22:10.700 "product_name": "Logical Volume", 00:22:10.700 "block_size": 4096, 00:22:10.700 "num_blocks": 5242880, 00:22:10.700 "uuid": "b1a46467-28e5-4697-81c3-42624f58493e", 00:22:10.700 "assigned_rate_limits": { 00:22:10.700 "rw_ios_per_sec": 0, 00:22:10.700 "rw_mbytes_per_sec": 0, 00:22:10.700 "r_mbytes_per_sec": 0, 00:22:10.700 "w_mbytes_per_sec": 0 00:22:10.700 }, 00:22:10.700 "claimed": false, 00:22:10.700 "zoned": false, 00:22:10.700 "supported_io_types": { 00:22:10.700 "read": true, 00:22:10.700 "write": true, 00:22:10.700 "unmap": true, 00:22:10.700 "flush": false, 00:22:10.700 "reset": true, 00:22:10.700 "nvme_admin": false, 00:22:10.700 "nvme_io": false, 00:22:10.700 "nvme_io_md": false, 00:22:10.700 "write_zeroes": true, 00:22:10.700 "zcopy": false, 00:22:10.700 "get_zone_info": false, 00:22:10.700 "zone_management": false, 00:22:10.700 "zone_append": false, 00:22:10.700 "compare": false, 00:22:10.700 "compare_and_write": false, 00:22:10.700 "abort": false, 00:22:10.700 "seek_hole": true, 00:22:10.700 "seek_data": true, 
00:22:10.700 "copy": false, 00:22:10.700 "nvme_iov_md": false 00:22:10.700 }, 00:22:10.700 "driver_specific": { 00:22:10.700 "lvol": { 00:22:10.700 "lvol_store_uuid": "16d9a5d5-2524-4766-bb35-b558f0c45b3e", 00:22:10.700 "base_bdev": "basen1", 00:22:10.700 "thin_provision": true, 00:22:10.700 "num_allocated_clusters": 0, 00:22:10.700 "snapshot": false, 00:22:10.700 "clone": false, 00:22:10.700 "esnap_clone": false 00:22:10.700 } 00:22:10.700 } 00:22:10.700 } 00:22:10.700 ]' 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:10.700 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:22:10.958 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:22:10.958 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:22:10.958 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:22:11.218 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:22:11.218 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:22:11.218 22:03:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d b1a46467-28e5-4697-81c3-42624f58493e -c cachen1p0 --l2p_dram_limit 2 00:22:11.218 [2024-09-30 22:03:56.014595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.014636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:11.218 [2024-09-30 22:03:56.014648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:11.218 [2024-09-30 22:03:56.014655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.014699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.014706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:11.218 [2024-09-30 22:03:56.014716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:22:11.218 [2024-09-30 22:03:56.014723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.014739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:11.218 [2024-09-30 22:03:56.014942] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:11.218 [2024-09-30 22:03:56.014955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.014963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] 
name: Open cache bdev 00:22:11.218 [2024-09-30 22:03:56.014971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.219 ms 00:22:11.218 [2024-09-30 22:03:56.014977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.015002] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 04fea156-4f1d-44df-ae9c-f9b8d05603b4 00:22:11.218 [2024-09-30 22:03:56.016052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.016082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:22:11.218 [2024-09-30 22:03:56.016093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:22:11.218 [2024-09-30 22:03:56.016105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.021349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.021378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:11.218 [2024-09-30 22:03:56.021386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.183 ms 00:22:11.218 [2024-09-30 22:03:56.021395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.021428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.021436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:11.218 [2024-09-30 22:03:56.021444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:22:11.218 [2024-09-30 22:03:56.021454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.021483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.021492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:11.218 [2024-09-30 22:03:56.021498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:11.218 [2024-09-30 22:03:56.021505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.021524] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:11.218 [2024-09-30 22:03:56.022835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.022960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:11.218 [2024-09-30 22:03:56.022975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.314 ms 00:22:11.218 [2024-09-30 22:03:56.022981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.023005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.023011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:11.218 [2024-09-30 22:03:56.023021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:11.218 [2024-09-30 22:03:56.023030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.023044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:22:11.218 [2024-09-30 22:03:56.023153] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:11.218 [2024-09-30 
22:03:56.023164] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:11.218 [2024-09-30 22:03:56.023172] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:11.218 [2024-09-30 22:03:56.023181] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:11.218 [2024-09-30 22:03:56.023211] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:11.218 [2024-09-30 22:03:56.023221] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:11.218 [2024-09-30 22:03:56.023227] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:11.218 [2024-09-30 22:03:56.023234] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:11.218 [2024-09-30 22:03:56.023241] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:11.218 [2024-09-30 22:03:56.023256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.023262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:11.218 [2024-09-30 22:03:56.023269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:22:11.218 [2024-09-30 22:03:56.023276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.023344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.218 [2024-09-30 22:03:56.023350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:11.218 [2024-09-30 22:03:56.023357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:22:11.218 [2024-09-30 22:03:56.023362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.218 [2024-09-30 22:03:56.023437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:11.218 [2024-09-30 22:03:56.023444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:11.218 [2024-09-30 22:03:56.023451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:11.218 [2024-09-30 22:03:56.023457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.218 [2024-09-30 22:03:56.023464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:11.218 [2024-09-30 22:03:56.023469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:11.218 [2024-09-30 22:03:56.023478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:11.218 [2024-09-30 22:03:56.023483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:11.218 [2024-09-30 22:03:56.023489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:11.218 [2024-09-30 22:03:56.023494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.218 [2024-09-30 22:03:56.023500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:11.218 [2024-09-30 22:03:56.023505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:11.218 [2024-09-30 22:03:56.023514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.218 [2024-09-30 22:03:56.023519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:11.218 [2024-09-30 22:03:56.023525] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:11.218 [2024-09-30 22:03:56.023531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.218 [2024-09-30 22:03:56.023538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:11.218 [2024-09-30 22:03:56.023545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:11.218 [2024-09-30 22:03:56.023552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.218 [2024-09-30 22:03:56.023558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:11.218 [2024-09-30 22:03:56.023565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:11.218 [2024-09-30 22:03:56.023571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:11.218 [2024-09-30 22:03:56.023578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:11.218 [2024-09-30 22:03:56.023584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:11.218 [2024-09-30 22:03:56.023591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:11.218 [2024-09-30 22:03:56.023597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:11.218 [2024-09-30 22:03:56.023604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:11.218 [2024-09-30 22:03:56.023609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:11.219 [2024-09-30 22:03:56.023618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:11.219 [2024-09-30 22:03:56.023624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:11.219 [2024-09-30 22:03:56.023631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:11.219 [2024-09-30 22:03:56.023637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:11.219 [2024-09-30 22:03:56.023645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:11.219 [2024-09-30 22:03:56.023651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.219 [2024-09-30 22:03:56.023659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:11.219 [2024-09-30 22:03:56.023664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:11.219 [2024-09-30 22:03:56.023671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.219 [2024-09-30 22:03:56.023677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:11.219 [2024-09-30 22:03:56.023684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:11.219 [2024-09-30 22:03:56.023689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.219 [2024-09-30 22:03:56.023696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:11.219 [2024-09-30 22:03:56.023702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:11.219 [2024-09-30 22:03:56.023709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.219 [2024-09-30 22:03:56.023714] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:11.219 [2024-09-30 22:03:56.023723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:11.219 [2024-09-30 22:03:56.023729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:11.219 [2024-09-30 22:03:56.023737] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:11.219 [2024-09-30 22:03:56.023743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:11.219 [2024-09-30 22:03:56.023751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:11.219 [2024-09-30 22:03:56.023757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:11.219 [2024-09-30 22:03:56.023765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:11.219 [2024-09-30 22:03:56.023771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:11.219 [2024-09-30 22:03:56.023778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:11.219 [2024-09-30 22:03:56.023787] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:11.219 [2024-09-30 22:03:56.023796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:11.219 [2024-09-30 22:03:56.023814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:11.219 [2024-09-30 22:03:56.023834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:11.219 [2024-09-30 22:03:56.023842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:11.219 [2024-09-30 22:03:56.023848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:11.219 [2024-09-30 22:03:56.023855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:11.219 [2024-09-30 22:03:56.023902] upgrade/ftl_sb_v5.c: 
422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:11.219 [2024-09-30 22:03:56.023910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:11.219 [2024-09-30 22:03:56.023924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:11.219 [2024-09-30 22:03:56.023929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:11.219 [2024-09-30 22:03:56.023935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:11.219 [2024-09-30 22:03:56.023941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:11.219 [2024-09-30 22:03:56.023949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:11.219 [2024-09-30 22:03:56.023954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.556 ms 00:22:11.219 [2024-09-30 22:03:56.023961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:11.219 [2024-09-30 22:03:56.024002] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:22:11.219 [2024-09-30 22:03:56.024012] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:13.748 [2024-09-30 22:03:58.401670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.748 [2024-09-30 22:03:58.401737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:13.748 [2024-09-30 22:03:58.401754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2377.660 ms 00:22:13.748 [2024-09-30 22:03:58.401764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.748 [2024-09-30 22:03:58.410083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.748 [2024-09-30 22:03:58.410127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:13.748 [2024-09-30 22:03:58.410144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.243 ms 00:22:13.748 [2024-09-30 22:03:58.410155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.748 [2024-09-30 22:03:58.410242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.748 [2024-09-30 22:03:58.410257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:13.749 [2024-09-30 22:03:58.410267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:13.749 [2024-09-30 22:03:58.410276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.418351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.418388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:13.749 [2024-09-30 22:03:58.418402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.040 ms 00:22:13.749 [2024-09-30 22:03:58.418412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.418439] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.418449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:13.749 [2024-09-30 22:03:58.418462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:13.749 [2024-09-30 22:03:58.418471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.418795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.418812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:13.749 [2024-09-30 22:03:58.418821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:22:13.749 [2024-09-30 22:03:58.418832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.418868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.418877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:13.749 [2024-09-30 22:03:58.418887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:22:13.749 [2024-09-30 22:03:58.418896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.434752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.434821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:13.749 [2024-09-30 22:03:58.434841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.835 ms 00:22:13.749 [2024-09-30 22:03:58.434858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.447058] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:13.749 [2024-09-30 22:03:58.447913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.447942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:13.749 [2024-09-30 22:03:58.447957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.911 ms 00:22:13.749 [2024-09-30 22:03:58.447972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.460515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.460551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:22:13.749 [2024-09-30 22:03:58.460565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.516 ms 00:22:13.749 [2024-09-30 22:03:58.460575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.460640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.460650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:13.749 [2024-09-30 22:03:58.460660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:22:13.749 [2024-09-30 22:03:58.460667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.463121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.463155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:22:13.749 [2024-09-30 22:03:58.463168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 2.421 ms 00:22:13.749 [2024-09-30 22:03:58.463176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.465946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.465978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:22:13.749 [2024-09-30 22:03:58.465988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.732 ms 00:22:13.749 [2024-09-30 22:03:58.465995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.466278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.466287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:13.749 [2024-09-30 22:03:58.466299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:22:13.749 [2024-09-30 22:03:58.466306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.494893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.494929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:22:13.749 [2024-09-30 22:03:58.494942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.565 ms 00:22:13.749 [2024-09-30 22:03:58.494952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.498949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.498984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:22:13.749 [2024-09-30 22:03:58.498997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.939 ms 00:22:13.749 [2024-09-30 22:03:58.499004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.502296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.502329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:22:13.749 [2024-09-30 22:03:58.502340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.256 ms 00:22:13.749 [2024-09-30 22:03:58.502348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.505483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.505621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:13.749 [2024-09-30 22:03:58.505645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.098 ms 00:22:13.749 [2024-09-30 22:03:58.505652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.505688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.505698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:13.749 [2024-09-30 22:03:58.505708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:13.749 [2024-09-30 22:03:58.505715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.505776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:13.749 [2024-09-30 22:03:58.505785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:13.749 [2024-09-30 22:03:58.505794] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:22:13.749 [2024-09-30 22:03:58.505801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:13.749 [2024-09-30 22:03:58.506630] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2491.659 ms, result 0 00:22:13.749 { 00:22:13.749 "name": "ftl", 00:22:13.749 "uuid": "04fea156-4f1d-44df-ae9c-f9b8d05603b4" 00:22:13.749 } 00:22:13.749 22:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:22:14.006 [2024-09-30 22:03:58.695482] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.006 22:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:22:14.263 22:03:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:22:14.263 [2024-09-30 22:03:59.071880] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:14.521 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:22:14.521 [2024-09-30 22:03:59.264232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:14.521 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:15.089 Fill FTL, iteration 1 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=89837 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 89837 /var/tmp/spdk.tgt.sock 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 89837 ']' 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:22:15.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:15.089 22:03:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:15.089 [2024-09-30 22:03:59.683829] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:15.089 [2024-09-30 22:03:59.684077] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89837 ] 00:22:15.089 [2024-09-30 22:03:59.812304] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:15.090 [2024-09-30 22:03:59.832746] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.090 [2024-09-30 22:03:59.866762] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.021 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:16.021 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:16.021 22:04:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:22:16.021 ftln1 00:22:16.021 22:04:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:22:16.021 22:04:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 89837 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89837 ']' 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 89837 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89837 00:22:16.280 killing process with pid 89837 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:16.280 
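Condensed, the bring-up traced so far reduces to a short RPC sequence: attach the two local NVMe controllers, carve a thin lvol out of the base device and a 5 GiB split out of the cache device, bind them into an FTL bdev, export it over NVMe/TCP, and attach to it from a second target instance acting as the initiator. A sketch with the flags copied from the trace (UUID arguments elided, since they change per run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target side (spdk_tgt on core 0, default socket /var/tmp/spdk.sock):
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # -> basen1
    $rpc bdev_lvol_create_lvstore basen1 lvs                            # -> lvstore UUID
    $rpc bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>           # thin 20 GiB lvol
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # -> cachen1
    $rpc bdev_split_create cachen1 -s 5120 1                            # -> cachen1p0 (5 GiB)
    $rpc -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
    # Initiator side (second spdk_tgt on core 1, socket /var/tmp/spdk.tgt.sock):
    $rpc -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0      # -> ftln1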
22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89837' 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 89837 00:22:16.280 22:04:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 89837 00:22:16.538 22:04:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:22:16.538 22:04:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:16.538 [2024-09-30 22:04:01.333959] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:16.538 [2024-09-30 22:04:01.334072] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89864 ] 00:22:16.795 [2024-09-30 22:04:01.461321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:16.796 [2024-09-30 22:04:01.481152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.796 [2024-09-30 22:04:01.514821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.800  Copying: 207/1024 [MB] (207 MBps) Copying: 466/1024 [MB] (259 MBps) Copying: 727/1024 [MB] (261 MBps) Copying: 923/1024 [MB] (196 MBps) Copying: 1024/1024 [MB] (average 225 MBps) 00:22:21.800 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:22:21.800 Calculate MD5 checksum, iteration 1 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:21.800 22:04:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:21.800 [2024-09-30 22:04:06.455443] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
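Note the arithmetic in the fill: with --bs=1048576 and --count=1024, each pass moves exactly 1024 one-MiB blocks, i.e. 1 GiB, which is why --seek (and later --skip) advances in steps of 1024. Behind the tcp_dd helper sits a single spdk_dd invocation, reproduced here as traced; it connects through the JSON config saved during the initiator setup and writes straight to the remote ftln1 bdev:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0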
00:22:21.800 [2024-09-30 22:04:06.455559] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89921 ] 00:22:21.800 [2024-09-30 22:04:06.583116] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:21.800 [2024-09-30 22:04:06.601773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.058 [2024-09-30 22:04:06.633287] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.824  Copying: 666/1024 [MB] (666 MBps) Copying: 1024/1024 [MB] (average 669 MBps) 00:22:23.824 00:22:23.824 22:04:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:23.824 22:04:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:25.730 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:25.730 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=4208b490fbfea3a3780032d2a6f2243a 00:22:25.730 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:25.730 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:25.731 Fill FTL, iteration 2 00:22:25.731 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:25.731 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:25.731 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:25.731 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:25.731 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:25.731 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:25.731 22:04:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:25.989 [2024-09-30 22:04:10.556796] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:25.989 [2024-09-30 22:04:10.556880] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89967 ] 00:22:25.989 [2024-09-30 22:04:10.679331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
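Each iteration follows the same pattern: fill 1 GiB over TCP at the next offset, read the same 1 GiB back into a scratch file, and record its MD5 for later verification. Reduced to a sketch from the traced upgrade_shutdown.sh steps (tcp_dd is the helper shown above; $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl):

    iterations=2
    sums=()
    seek=0; skip=0
    for ((i = 0; i < iterations; i++)); do
        # Fill the next 1 GiB region of the FTL device with random data.
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
        ((seek += 1024))
        # Read the same region back and checksum it.
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
    done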
00:22:25.989 [2024-09-30 22:04:10.699320] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.989 [2024-09-30 22:04:10.733146] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.729  Copying: 216/1024 [MB] (216 MBps) Copying: 438/1024 [MB] (222 MBps) Copying: 704/1024 [MB] (266 MBps) Copying: 972/1024 [MB] (268 MBps) Copying: 1024/1024 [MB] (average 243 MBps) 00:22:30.729 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:30.729 Calculate MD5 checksum, iteration 2 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:30.729 22:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:30.729 [2024-09-30 22:04:15.362582] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:30.729 [2024-09-30 22:04:15.362692] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90021 ] 00:22:30.729 [2024-09-30 22:04:15.490229] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:30.729 [2024-09-30 22:04:15.509751] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.729 [2024-09-30 22:04:15.541081] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.238  Copying: 715/1024 [MB] (715 MBps) Copying: 1024/1024 [MB] (average 692 MBps) 00:22:33.238 00:22:33.238 22:04:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:33.238 22:04:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:35.767 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:35.767 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=b27ff9e974356efa389076b9c9d7ee1c 00:22:35.767 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:35.767 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:35.767 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:35.767 [2024-09-30 22:04:20.204494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.767 [2024-09-30 22:04:20.204537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:35.767 [2024-09-30 22:04:20.204548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:35.767 [2024-09-30 22:04:20.204555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.767 [2024-09-30 22:04:20.204578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.767 [2024-09-30 22:04:20.204587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:35.767 [2024-09-30 22:04:20.204594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:35.767 [2024-09-30 22:04:20.204602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.767 [2024-09-30 22:04:20.204617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:35.767 [2024-09-30 22:04:20.204624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:35.767 [2024-09-30 22:04:20.204631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:35.767 [2024-09-30 22:04:20.204639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:35.767 [2024-09-30 22:04:20.204687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.182 ms, result 0 00:22:35.767 true 00:22:35.767 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:35.767 { 00:22:35.767 "name": "ftl", 00:22:35.767 "properties": [ 00:22:35.767 { 00:22:35.767 "name": "superblock_version", 00:22:35.767 "value": 5, 00:22:35.767 "read-only": true 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "name": "base_device", 00:22:35.767 "bands": [ 00:22:35.767 { 00:22:35.767 "id": 0, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 1, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 2, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 3, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 
00:22:35.767 { 00:22:35.767 "id": 4, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 5, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 6, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 7, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 8, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 9, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 10, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 11, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 12, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 13, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 14, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 15, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 16, 00:22:35.767 "state": "FREE", 00:22:35.767 "validity": 0.0 00:22:35.767 }, 00:22:35.767 { 00:22:35.767 "id": 17, 00:22:35.767 "state": "FREE", 00:22:35.768 "validity": 0.0 00:22:35.768 } 00:22:35.768 ], 00:22:35.768 "read-only": true 00:22:35.768 }, 00:22:35.768 { 00:22:35.768 "name": "cache_device", 00:22:35.768 "type": "bdev", 00:22:35.768 "chunks": [ 00:22:35.768 { 00:22:35.768 "id": 0, 00:22:35.768 "state": "INACTIVE", 00:22:35.768 "utilization": 0.0 00:22:35.768 }, 00:22:35.768 { 00:22:35.768 "id": 1, 00:22:35.768 "state": "CLOSED", 00:22:35.768 "utilization": 1.0 00:22:35.768 }, 00:22:35.768 { 00:22:35.768 "id": 2, 00:22:35.768 "state": "CLOSED", 00:22:35.768 "utilization": 1.0 00:22:35.768 }, 00:22:35.768 { 00:22:35.768 "id": 3, 00:22:35.768 "state": "OPEN", 00:22:35.768 "utilization": 0.001953125 00:22:35.768 }, 00:22:35.768 { 00:22:35.768 "id": 4, 00:22:35.768 "state": "OPEN", 00:22:35.768 "utilization": 0.0 00:22:35.768 } 00:22:35.768 ], 00:22:35.768 "read-only": true 00:22:35.768 }, 00:22:35.768 { 00:22:35.768 "name": "verbose_mode", 00:22:35.768 "value": true, 00:22:35.768 "unit": "", 00:22:35.768 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:35.768 }, 00:22:35.768 { 00:22:35.768 "name": "prep_upgrade_on_shutdown", 00:22:35.768 "value": false, 00:22:35.768 "unit": "", 00:22:35.768 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:35.768 } 00:22:35.768 ] 00:22:35.768 } 00:22:35.768 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:36.026 [2024-09-30 22:04:20.596791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.026 [2024-09-30 22:04:20.596824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:36.026 [2024-09-30 22:04:20.596833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:36.026 [2024-09-30 22:04:20.596838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.026 [2024-09-30 
22:04:20.596855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.026 [2024-09-30 22:04:20.596861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:36.026 [2024-09-30 22:04:20.596867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:36.026 [2024-09-30 22:04:20.596872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.026 [2024-09-30 22:04:20.596887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.026 [2024-09-30 22:04:20.596893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:36.026 [2024-09-30 22:04:20.596899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:36.026 [2024-09-30 22:04:20.596905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.026 [2024-09-30 22:04:20.596945] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.148 ms, result 0 00:22:36.026 true 00:22:36.026 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:36.026 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:36.026 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:36.026 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:36.026 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:36.026 22:04:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:36.284 [2024-09-30 22:04:21.005124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.284 [2024-09-30 22:04:21.005160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:36.284 [2024-09-30 22:04:21.005169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:36.284 [2024-09-30 22:04:21.005175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.284 [2024-09-30 22:04:21.005201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.284 [2024-09-30 22:04:21.005207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:36.284 [2024-09-30 22:04:21.005214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:36.284 [2024-09-30 22:04:21.005220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.284 [2024-09-30 22:04:21.005235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.284 [2024-09-30 22:04:21.005241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:36.284 [2024-09-30 22:04:21.005247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:36.284 [2024-09-30 22:04:21.005252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.284 [2024-09-30 22:04:21.005295] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.161 ms, result 0 00:22:36.284 true 00:22:36.284 22:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties 
-b ftl 00:22:36.542 { 00:22:36.542 "name": "ftl", 00:22:36.542 "properties": [ 00:22:36.542 { 00:22:36.542 "name": "superblock_version", 00:22:36.542 "value": 5, 00:22:36.542 "read-only": true 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "name": "base_device", 00:22:36.542 "bands": [ 00:22:36.542 { 00:22:36.542 "id": 0, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 1, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 2, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 3, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 4, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 5, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 6, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 7, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 8, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 9, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 10, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 11, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 12, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 13, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 14, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 15, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 16, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 17, 00:22:36.542 "state": "FREE", 00:22:36.542 "validity": 0.0 00:22:36.542 } 00:22:36.542 ], 00:22:36.542 "read-only": true 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "name": "cache_device", 00:22:36.542 "type": "bdev", 00:22:36.542 "chunks": [ 00:22:36.542 { 00:22:36.542 "id": 0, 00:22:36.542 "state": "INACTIVE", 00:22:36.542 "utilization": 0.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 1, 00:22:36.542 "state": "CLOSED", 00:22:36.542 "utilization": 1.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 2, 00:22:36.542 "state": "CLOSED", 00:22:36.542 "utilization": 1.0 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 3, 00:22:36.542 "state": "OPEN", 00:22:36.542 "utilization": 0.001953125 00:22:36.542 }, 00:22:36.542 { 00:22:36.542 "id": 4, 00:22:36.542 "state": "OPEN", 00:22:36.543 "utilization": 0.0 00:22:36.543 } 00:22:36.543 ], 00:22:36.543 "read-only": true 00:22:36.543 }, 00:22:36.543 { 00:22:36.543 "name": "verbose_mode", 00:22:36.543 "value": true, 00:22:36.543 "unit": "", 00:22:36.543 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:36.543 }, 00:22:36.543 { 00:22:36.543 "name": "prep_upgrade_on_shutdown", 00:22:36.543 "value": true, 00:22:36.543 "unit": "", 00:22:36.543 "desc": "During shutdown, FTL executes all actions which are needed 
for upgrade to a new version" 00:22:36.543 } 00:22:36.543 ] 00:22:36.543 } 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 89727 ]] 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 89727 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 89727 ']' 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 89727 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89727 00:22:36.543 killing process with pid 89727 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89727' 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 89727 00:22:36.543 22:04:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 89727 00:22:36.543 [2024-09-30 22:04:21.304400] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:22:36.543 [2024-09-30 22:04:21.310473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.543 [2024-09-30 22:04:21.310510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:36.543 [2024-09-30 22:04:21.310519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:36.543 [2024-09-30 22:04:21.310526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:36.543 [2024-09-30 22:04:21.310548] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:36.543 [2024-09-30 22:04:21.310946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:36.543 [2024-09-30 22:04:21.310973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:36.543 [2024-09-30 22:04:21.310981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.389 ms 00:22:36.543 [2024-09-30 22:04:21.310990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.505963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.506027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:44.654 [2024-09-30 22:04:28.506042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7194.936 ms 00:22:44.654 [2024-09-30 22:04:28.506050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.507103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.507122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:44.654 [2024-09-30 22:04:28.507130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.040 ms 00:22:44.654 [2024-09-30 22:04:28.507135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 
[2024-09-30 22:04:28.508011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.508033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:22:44.654 [2024-09-30 22:04:28.508045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:22:44.654 [2024-09-30 22:04:28.508051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.509346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.509375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:44.654 [2024-09-30 22:04:28.509383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.268 ms 00:22:44.654 [2024-09-30 22:04:28.509388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.511426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.511456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:44.654 [2024-09-30 22:04:28.511463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.012 ms 00:22:44.654 [2024-09-30 22:04:28.511469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.511524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.511536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:44.654 [2024-09-30 22:04:28.511542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:22:44.654 [2024-09-30 22:04:28.511548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.512607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.512634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:22:44.654 [2024-09-30 22:04:28.512641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:22:44.654 [2024-09-30 22:04:28.512646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.513769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.513797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:22:44.654 [2024-09-30 22:04:28.513804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.098 ms 00:22:44.654 [2024-09-30 22:04:28.513809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.514662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.514696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:44.654 [2024-09-30 22:04:28.514704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.829 ms 00:22:44.654 [2024-09-30 22:04:28.514709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.515567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.515594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:44.654 [2024-09-30 22:04:28.515601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.813 ms 00:22:44.654 [2024-09-30 22:04:28.515606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.515628] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:44.654 [2024-09-30 22:04:28.515639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:44.654 [2024-09-30 22:04:28.515646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:44.654 [2024-09-30 22:04:28.515652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:44.654 [2024-09-30 22:04:28.515659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:44.654 [2024-09-30 22:04:28.515746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:44.654 [2024-09-30 22:04:28.515752] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 04fea156-4f1d-44df-ae9c-f9b8d05603b4 00:22:44.654 [2024-09-30 22:04:28.515758] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:44.654 [2024-09-30 22:04:28.515763] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:22:44.654 [2024-09-30 22:04:28.515768] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:22:44.654 [2024-09-30 22:04:28.515775] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:22:44.654 [2024-09-30 22:04:28.515785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:44.654 
[2024-09-30 22:04:28.515790] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:44.654 [2024-09-30 22:04:28.515796] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:44.654 [2024-09-30 22:04:28.515801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:44.654 [2024-09-30 22:04:28.515806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:44.654 [2024-09-30 22:04:28.515811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.515817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:44.654 [2024-09-30 22:04:28.515824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:22:44.654 [2024-09-30 22:04:28.515831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.654 [2024-09-30 22:04:28.517174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.654 [2024-09-30 22:04:28.517206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:44.654 [2024-09-30 22:04:28.517217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.332 ms 00:22:44.655 [2024-09-30 22:04:28.517223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.517291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:44.655 [2024-09-30 22:04:28.517298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:44.655 [2024-09-30 22:04:28.517304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:22:44.655 [2024-09-30 22:04:28.517309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.522047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.522081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:44.655 [2024-09-30 22:04:28.522088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.522095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.522116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.522122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:44.655 [2024-09-30 22:04:28.522128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.522133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.522170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.522178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:44.655 [2024-09-30 22:04:28.522198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.522204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.522216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.522223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:44.655 [2024-09-30 22:04:28.522232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.522238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 
0 00:22:44.655 [2024-09-30 22:04:28.530583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.530618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:44.655 [2024-09-30 22:04:28.530631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.530638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.537450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:44.655 [2024-09-30 22:04:28.537458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.537464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.537520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:44.655 [2024-09-30 22:04:28.537526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.537537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.537566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:44.655 [2024-09-30 22:04:28.537572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.537578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.537634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:44.655 [2024-09-30 22:04:28.537641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.537646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.537681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:44.655 [2024-09-30 22:04:28.537690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.537697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.537740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:44.655 [2024-09-30 22:04:28.537747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.537753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:44.655 [2024-09-30 22:04:28.537796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:44.655 [2024-09-30 22:04:28.537802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:44.655 [2024-09-30 22:04:28.537808] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:44.655 [2024-09-30 22:04:28.537898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7227.390 ms, result 0 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=90194 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 90194 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 90194 ']' 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:47.942 22:04:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:47.942 [2024-09-30 22:04:32.610364] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:47.942 [2024-09-30 22:04:32.610482] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90194 ] 00:22:47.942 [2024-09-30 22:04:32.738658] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
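At this point the target is relaunched from the saved tgt.json and the script blocks in waitforlisten until the new process (pid 90194) answers on /var/tmp/spdk.sock. A minimal sketch of that relaunch-and-wait step, assuming the default RPC socket path and using rpc_get_methods as the liveness probe (a stand-in for whatever probe waitforlisten actually issues):

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt '--cpumask=[0]' \
        --config="$SPDK"/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!

    # Poll the RPC socket until the target answers, bailing out if it died.
    for _ in $(seq 1 100); do
        kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
        "$SPDK"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && break
        sleep 0.1
    done

Once the socket answers, the FTL device is recreated from the config and the startup trace below (superblock load, layout setup, NV cache scrub, L2P restore) begins.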
00:22:48.200 [2024-09-30 22:04:32.758044] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:48.200 [2024-09-30 22:04:32.791980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.459 [2024-09-30 22:04:33.059565] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:48.459 [2024-09-30 22:04:33.059629] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:22:48.459 [2024-09-30 22:04:33.202315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.202359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:48.459 [2024-09-30 22:04:33.202374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:48.459 [2024-09-30 22:04:33.202382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.459 [2024-09-30 22:04:33.202430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.202440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:48.459 [2024-09-30 22:04:33.202448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:22:48.459 [2024-09-30 22:04:33.202457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.459 [2024-09-30 22:04:33.202478] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:48.459 [2024-09-30 22:04:33.202783] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:48.459 [2024-09-30 22:04:33.202802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.202812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:48.459 [2024-09-30 22:04:33.202820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.331 ms 00:22:48.459 [2024-09-30 22:04:33.202828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.459 [2024-09-30 22:04:33.203853] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:22:48.459 [2024-09-30 22:04:33.206137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.206171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:22:48.459 [2024-09-30 22:04:33.206185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.285 ms 00:22:48.459 [2024-09-30 22:04:33.206204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.459 [2024-09-30 22:04:33.206258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.206268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:22:48.459 [2024-09-30 22:04:33.206276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:22:48.459 [2024-09-30 22:04:33.206283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.459 [2024-09-30 22:04:33.211226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.211261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:48.459 [2024-09-30 22:04:33.211270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.887 ms 00:22:48.459 [2024-09-30 22:04:33.211277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:22:48.459 [2024-09-30 22:04:33.211314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.211322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:48.459 [2024-09-30 22:04:33.211330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:22:48.459 [2024-09-30 22:04:33.211337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.459 [2024-09-30 22:04:33.211376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.211385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:48.459 [2024-09-30 22:04:33.211395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:48.459 [2024-09-30 22:04:33.211401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.459 [2024-09-30 22:04:33.211422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:48.459 [2024-09-30 22:04:33.212762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.459 [2024-09-30 22:04:33.212791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:48.459 [2024-09-30 22:04:33.212805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.346 ms 00:22:48.459 [2024-09-30 22:04:33.212813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.460 [2024-09-30 22:04:33.212839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.460 [2024-09-30 22:04:33.212851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:48.460 [2024-09-30 22:04:33.212861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:48.460 [2024-09-30 22:04:33.212868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.460 [2024-09-30 22:04:33.212891] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:22:48.460 [2024-09-30 22:04:33.212907] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:22:48.460 [2024-09-30 22:04:33.212941] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:22:48.460 [2024-09-30 22:04:33.212956] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:22:48.460 [2024-09-30 22:04:33.213057] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:48.460 [2024-09-30 22:04:33.213074] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:48.460 [2024-09-30 22:04:33.213086] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:48.460 [2024-09-30 22:04:33.213096] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213105] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213113] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:48.460 [2024-09-30 22:04:33.213120] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:48.460 [2024-09-30 22:04:33.213127] ftl_layout.c: 
691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:48.460 [2024-09-30 22:04:33.213133] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:48.460 [2024-09-30 22:04:33.213144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.460 [2024-09-30 22:04:33.213151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:48.460 [2024-09-30 22:04:33.213158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:22:48.460 [2024-09-30 22:04:33.213166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.460 [2024-09-30 22:04:33.213262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.460 [2024-09-30 22:04:33.213270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:48.460 [2024-09-30 22:04:33.213278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.080 ms 00:22:48.460 [2024-09-30 22:04:33.213285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.460 [2024-09-30 22:04:33.213387] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:48.460 [2024-09-30 22:04:33.213396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:48.460 [2024-09-30 22:04:33.213405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:48.460 [2024-09-30 22:04:33.213431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:48.460 [2024-09-30 22:04:33.213447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:48.460 [2024-09-30 22:04:33.213454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:48.460 [2024-09-30 22:04:33.213461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:48.460 [2024-09-30 22:04:33.213476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:22:48.460 [2024-09-30 22:04:33.213483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:48.460 [2024-09-30 22:04:33.213499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:48.460 [2024-09-30 22:04:33.213506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:48.460 [2024-09-30 22:04:33.213524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:48.460 [2024-09-30 22:04:33.213532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:48.460 [2024-09-30 22:04:33.213555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:48.460 [2024-09-30 22:04:33.213563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:48.460 [2024-09-30 
22:04:33.213570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:48.460 [2024-09-30 22:04:33.213577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:48.460 [2024-09-30 22:04:33.213584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:48.460 [2024-09-30 22:04:33.213599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:48.460 [2024-09-30 22:04:33.213606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:48.460 [2024-09-30 22:04:33.213621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:48.460 [2024-09-30 22:04:33.213628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:48.460 [2024-09-30 22:04:33.213644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:48.460 [2024-09-30 22:04:33.213651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:48.460 [2024-09-30 22:04:33.213666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:48.460 [2024-09-30 22:04:33.213687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:48.460 [2024-09-30 22:04:33.213710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:48.460 [2024-09-30 22:04:33.213717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213725] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:22:48.460 [2024-09-30 22:04:33.213733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:48.460 [2024-09-30 22:04:33.213741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:48.460 [2024-09-30 22:04:33.213758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:48.460 [2024-09-30 22:04:33.213767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:48.460 [2024-09-30 22:04:33.213775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:48.460 [2024-09-30 22:04:33.213783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:48.460 [2024-09-30 22:04:33.213790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:48.460 [2024-09-30 22:04:33.213798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:48.460 [2024-09-30 22:04:33.213806] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB 
metadata layout - nvc: 00:22:48.460 [2024-09-30 22:04:33.213815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:48.460 [2024-09-30 22:04:33.213830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:48.460 [2024-09-30 22:04:33.213851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:48.460 [2024-09-30 22:04:33.213858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:48.460 [2024-09-30 22:04:33.213865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:48.460 [2024-09-30 22:04:33.213871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:48.460 [2024-09-30 22:04:33.213921] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:48.460 [2024-09-30 22:04:33.213929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:48.460 [2024-09-30 22:04:33.213945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:48.460 [2024-09-30 22:04:33.213952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:48.461 [2024-09-30 22:04:33.213959] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:48.461 [2024-09-30 22:04:33.213967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.461 [2024-09-30 22:04:33.213974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:48.461 [2024-09-30 22:04:33.213983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:22:48.461 [2024-09-30 22:04:33.213991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.461 [2024-09-30 22:04:33.214030] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:22:48.461 [2024-09-30 22:04:33.214041] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:50.993 [2024-09-30 22:04:35.498601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.993 [2024-09-30 22:04:35.498651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:50.993 [2024-09-30 22:04:35.498664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2284.564 ms 00:22:50.993 [2024-09-30 22:04:35.498672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.993 [2024-09-30 22:04:35.506831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.993 [2024-09-30 22:04:35.506872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:50.993 [2024-09-30 22:04:35.506883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.079 ms 00:22:50.993 [2024-09-30 22:04:35.506892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.993 [2024-09-30 22:04:35.506933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.993 [2024-09-30 22:04:35.506948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:50.993 [2024-09-30 22:04:35.506956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:22:50.993 [2024-09-30 22:04:35.506963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.993 [2024-09-30 22:04:35.525573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.993 [2024-09-30 22:04:35.525617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:50.993 [2024-09-30 22:04:35.525629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.563 ms 00:22:50.993 [2024-09-30 22:04:35.525637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.993 [2024-09-30 22:04:35.525674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.525683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:50.994 [2024-09-30 22:04:35.525691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:50.994 [2024-09-30 22:04:35.525698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.526043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.526070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:50.994 [2024-09-30 22:04:35.526080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:22:50.994 [2024-09-30 22:04:35.526087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
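The layout dump above gives every region both as a hex block offset/size (blk_offs/blk_sz, in FTL blocks) and, earlier, as MiB figures; the two agree at a 4 KiB block size, which the dump implies but never states. A quick consistency check in shell arithmetic, using the data_btm region (type 0x9) and the L2P sizing from the lines above:

    # data_btm: blk_sz 0x480000 FTL blocks at 4 KiB each -> printed 18432.00 MiB
    echo $(( 0x480000 * 4096 / 1048576 ))   # 18432

    # L2P table: 3774873 entries x 4-byte addresses = ~14.4 MiB of payload,
    # which fits the 14.50 MiB 'l2p' region reported in the NV cache layout.
    echo $(( 3774873 * 4 ))                 # 15099492 bytes

The 20480 MiB base device minus the 18432 MiB data region leaves roughly 2048 MiB, which per the dump holds the superblock mirror (type 0x1), the valid map (type 0x5) and unallocated space (type 0xfffffffe).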
00:22:50.994 [2024-09-30 22:04:35.526126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.526134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:50.994 [2024-09-30 22:04:35.526149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:22:50.994 [2024-09-30 22:04:35.526157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.531521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.531554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:50.994 [2024-09-30 22:04:35.531563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.343 ms 00:22:50.994 [2024-09-30 22:04:35.531571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.533845] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:22:50.994 [2024-09-30 22:04:35.533888] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:22:50.994 [2024-09-30 22:04:35.533901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.533910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:22:50.994 [2024-09-30 22:04:35.533919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.250 ms 00:22:50.994 [2024-09-30 22:04:35.533928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.538350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.538390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:22:50.994 [2024-09-30 22:04:35.538401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.333 ms 00:22:50.994 [2024-09-30 22:04:35.538410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.539832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.539868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:22:50.994 [2024-09-30 22:04:35.539879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.380 ms 00:22:50.994 [2024-09-30 22:04:35.539889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.541342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.541374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:22:50.994 [2024-09-30 22:04:35.541384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.407 ms 00:22:50.994 [2024-09-30 22:04:35.541393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.541770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.541795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:50.994 [2024-09-30 22:04:35.541805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:22:50.994 [2024-09-30 22:04:35.541814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.556719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.556760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:22:50.994 [2024-09-30 22:04:35.556770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.885 ms 00:22:50.994 [2024-09-30 22:04:35.556778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.564016] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:50.994 [2024-09-30 22:04:35.564677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.564710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:50.994 [2024-09-30 22:04:35.564721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.862 ms 00:22:50.994 [2024-09-30 22:04:35.564729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.564773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.564782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:22:50.994 [2024-09-30 22:04:35.564791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:50.994 [2024-09-30 22:04:35.564798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.564850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.564859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:50.994 [2024-09-30 22:04:35.564867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:22:50.994 [2024-09-30 22:04:35.564877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.564897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.564905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:50.994 [2024-09-30 22:04:35.564913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:50.994 [2024-09-30 22:04:35.564921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.564952] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:22:50.994 [2024-09-30 22:04:35.564961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.564969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:22:50.994 [2024-09-30 22:04:35.564976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:22:50.994 [2024-09-30 22:04:35.564984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.567737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.567770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:50.994 [2024-09-30 22:04:35.567787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.733 ms 00:22:50.994 [2024-09-30 22:04:35.567799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.567863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:50.994 [2024-09-30 22:04:35.567873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:50.994 
[2024-09-30 22:04:35.567881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:22:50.994 [2024-09-30 22:04:35.567888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:50.994 [2024-09-30 22:04:35.568786] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2366.107 ms, result 0 00:22:50.994 [2024-09-30 22:04:35.581120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.994 [2024-09-30 22:04:35.597113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:50.994 [2024-09-30 22:04:35.605223] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:50.994 22:04:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:50.994 22:04:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:50.994 22:04:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:50.994 22:04:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:22:50.994 22:04:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:51.256 [2024-09-30 22:04:35.829290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:51.256 [2024-09-30 22:04:35.829329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:51.256 [2024-09-30 22:04:35.829341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:51.256 [2024-09-30 22:04:35.829349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:51.256 [2024-09-30 22:04:35.829370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:51.256 [2024-09-30 22:04:35.829378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:51.256 [2024-09-30 22:04:35.829386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:51.256 [2024-09-30 22:04:35.829394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:51.256 [2024-09-30 22:04:35.829415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:51.256 [2024-09-30 22:04:35.829423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:51.256 [2024-09-30 22:04:35.829431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:51.256 [2024-09-30 22:04:35.829438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:51.256 [2024-09-30 22:04:35.829492] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.196 ms, result 0 00:22:51.256 true 00:22:51.256 22:04:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:51.256 { 00:22:51.256 "name": "ftl", 00:22:51.256 "properties": [ 00:22:51.256 { 00:22:51.256 "name": "superblock_version", 00:22:51.256 "value": 5, 00:22:51.256 "read-only": true 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "name": "base_device", 00:22:51.256 "bands": [ 00:22:51.256 { 00:22:51.256 "id": 0, 00:22:51.256 "state": "CLOSED", 00:22:51.256 "validity": 1.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 1, 00:22:51.256 "state": "CLOSED", 00:22:51.256 "validity": 1.0 
00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 2, 00:22:51.256 "state": "CLOSED", 00:22:51.256 "validity": 0.007843137254901933 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 3, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 4, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 5, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 6, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 7, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 8, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 9, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 10, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 11, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 12, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 13, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 14, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 15, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 16, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 17, 00:22:51.256 "state": "FREE", 00:22:51.256 "validity": 0.0 00:22:51.256 } 00:22:51.256 ], 00:22:51.256 "read-only": true 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "name": "cache_device", 00:22:51.256 "type": "bdev", 00:22:51.256 "chunks": [ 00:22:51.256 { 00:22:51.256 "id": 0, 00:22:51.256 "state": "INACTIVE", 00:22:51.256 "utilization": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 1, 00:22:51.256 "state": "OPEN", 00:22:51.256 "utilization": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 2, 00:22:51.256 "state": "OPEN", 00:22:51.256 "utilization": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 3, 00:22:51.256 "state": "FREE", 00:22:51.256 "utilization": 0.0 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "id": 4, 00:22:51.256 "state": "FREE", 00:22:51.256 "utilization": 0.0 00:22:51.256 } 00:22:51.256 ], 00:22:51.256 "read-only": true 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "name": "verbose_mode", 00:22:51.256 "value": true, 00:22:51.256 "unit": "", 00:22:51.256 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:51.256 }, 00:22:51.256 { 00:22:51.256 "name": "prep_upgrade_on_shutdown", 00:22:51.256 "value": false, 00:22:51.256 "unit": "", 00:22:51.256 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:51.256 } 00:22:51.256 ] 00:22:51.256 } 00:22:51.256 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:51.256 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:22:51.256 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:51.513 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:22:51.513 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:22:51.513 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:22:51.513 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:51.513 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:22:51.770 Validate MD5 checksum, iteration 1 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:51.770 22:04:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:51.770 [2024-09-30 22:04:36.509111] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:51.770 [2024-09-30 22:04:36.509234] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90249 ] 00:22:52.028 [2024-09-30 22:04:36.636760] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
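The two jq probes in the trace above gate the rest of the test: the first counts cache_device chunks with non-zero utilization (used=0), the second counts bands still reported in the OPENED state (opened=0), and only a fully quiesced device proceeds to checksum validation. A minimal sketch of those probes, assuming ftl_get_properties is the wrapper around the rpc.py call shown above and that a non-zero count aborts the run:

  used=$(ftl_get_properties | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  [[ $used -ne 0 ]] && exit 1    # write buffer must be fully drained first
  opened=$(ftl_get_properties | jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length')
  [[ $opened -ne 0 ]] && exit 1  # no band may still be mid-write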
00:22:52.028 [2024-09-30 22:04:36.659752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.028 [2024-09-30 22:04:36.692903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.533  Copying: 689/1024 [MB] (689 MBps) Copying: 1024/1024 [MB] (average 679 MBps) 00:22:54.533 00:22:54.533 22:04:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:22:54.533 22:04:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:22:56.432 Validate MD5 checksum, iteration 2 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4208b490fbfea3a3780032d2a6f2243a 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4208b490fbfea3a3780032d2a6f2243a != \4\2\0\8\b\4\9\0\f\b\f\e\a\3\a\3\7\8\0\0\3\2\d\2\a\6\f\2\2\4\3\a ]] 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:56.432 22:04:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:56.690 [2024-09-30 22:04:41.305967] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:22:56.690 [2024-09-30 22:04:41.306105] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90300 ] 00:22:56.690 [2024-09-30 22:04:41.433300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
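Each validation iteration reads one 1 GiB window from the ftln1 initiator bdev over NVMe/TCP (bs=1048576, count=1024, qd=2), hashes the resulting file, and advances skip by 1024 blocks so the next pass covers the following window; the backslash-escaped operand in the [[ ... != \4\2\0\8... ]] trace line is just xtrace rendering the quoted right-hand side of != literally, since an unquoted operand there would be treated as a glob pattern. A condensed sketch of one pass (tcp_dd is the helper driving spdk_dd in the trace; the sums array is illustrative):

  skip=0
  for ((i = 0; i < iterations; i++)); do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    sums[i]=$(md5sum "$tmp_file" | cut -f1 -d' ')   # e.g. 4208b490... for window 0
    skip=$((skip + 1024))
  done

Running the same pass before and after the dirty restart must yield identical per-window sums, which is exactly what the matching 4208b490.../b27ff9e9... values in this log demonstrate.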
00:22:56.690 [2024-09-30 22:04:41.455599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.690 [2024-09-30 22:04:41.489634] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.195  Copying: 756/1024 [MB] (756 MBps) Copying: 1024/1024 [MB] (average 734 MBps) 00:22:59.195 00:22:59.195 22:04:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:22:59.195 22:04:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:01.092 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:01.092 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b27ff9e974356efa389076b9c9d7ee1c 00:23:01.092 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b27ff9e974356efa389076b9c9d7ee1c != \b\2\7\f\f\9\e\9\7\4\3\5\6\e\f\a\3\8\9\0\7\6\b\9\c\9\d\7\e\e\1\c ]] 00:23:01.092 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:01.092 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:01.092 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:23:01.092 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 90194 ]] 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 90194 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=90356 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 90356 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 90356 ']' 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:01.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:01.093 22:04:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:01.093 [2024-09-30 22:04:45.889418] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 
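Here the test forces the power-loss path: the original target (pid 90194) is killed with SIGKILL so FTL never executes a clean shutdown and its superblock stays dirty, then a fresh spdk_tgt is started from the same tgt.json; the 'Restore P2L checkpoints', 'Recover open bands P2L', and 'Recover chunk state' actions that follow are FTL replaying its recovery from the NV cache. The kill-and-restart sequence, sketched from the helpers visible in the trace (with $SPDK_BIN_DIR standing in for the build path shown above):

  kill -9 "$spdk_tgt_pid"          # SIGKILL: no clean FTL shutdown, dirty state persists
  unset spdk_tgt_pid
  "$SPDK_BIN_DIR/spdk_tgt" --cpumask='[0]' --config="$testdir/config/tgt.json" &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"    # block until the RPC socket answers before issuing RPCs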
00:23:01.093 [2024-09-30 22:04:45.889536] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90356 ] 00:23:01.351 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 90194 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:23:01.351 [2024-09-30 22:04:46.017619] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:01.351 [2024-09-30 22:04:46.037856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.351 [2024-09-30 22:04:46.069376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.608 [2024-09-30 22:04:46.325133] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:01.608 [2024-09-30 22:04:46.325185] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:01.867 [2024-09-30 22:04:46.462915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.462955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:01.867 [2024-09-30 22:04:46.462974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:01.867 [2024-09-30 22:04:46.462983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.463035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.463046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:01.867 [2024-09-30 22:04:46.463054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:23:01.867 [2024-09-30 22:04:46.463063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.463087] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:01.867 [2024-09-30 22:04:46.463626] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:01.867 [2024-09-30 22:04:46.463663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.463676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:01.867 [2024-09-30 22:04:46.463685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.584 ms 00:23:01.867 [2024-09-30 22:04:46.463693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.464046] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:01.867 [2024-09-30 22:04:46.467417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.467448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:01.867 [2024-09-30 22:04:46.467464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.372 ms 00:23:01.867 [2024-09-30 22:04:46.467474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.468462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.468490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super 
block 00:23:01.867 [2024-09-30 22:04:46.468501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:23:01.867 [2024-09-30 22:04:46.468509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.468781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.468798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:01.867 [2024-09-30 22:04:46.468807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.212 ms 00:23:01.867 [2024-09-30 22:04:46.468814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.468849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.468857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:01.867 [2024-09-30 22:04:46.468865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:23:01.867 [2024-09-30 22:04:46.468875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.468904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.468912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:01.867 [2024-09-30 22:04:46.468920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:01.867 [2024-09-30 22:04:46.468930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.468949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:01.867 [2024-09-30 22:04:46.469802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.469823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:01.867 [2024-09-30 22:04:46.469831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.857 ms 00:23:01.867 [2024-09-30 22:04:46.469838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.469864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.469872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:01.867 [2024-09-30 22:04:46.469883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:01.867 [2024-09-30 22:04:46.469890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.469916] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:01.867 [2024-09-30 22:04:46.469933] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:01.867 [2024-09-30 22:04:46.469969] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:01.867 [2024-09-30 22:04:46.469986] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:01.867 [2024-09-30 22:04:46.470088] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:01.867 [2024-09-30 22:04:46.470100] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:01.867 [2024-09-30 22:04:46.470110] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:01.867 [2024-09-30 22:04:46.470119] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:01.867 [2024-09-30 22:04:46.470128] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:01.867 [2024-09-30 22:04:46.470136] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:01.867 [2024-09-30 22:04:46.470143] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:01.867 [2024-09-30 22:04:46.470150] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:01.867 [2024-09-30 22:04:46.470157] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:01.867 [2024-09-30 22:04:46.470163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.470170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:01.867 [2024-09-30 22:04:46.470178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:23:01.867 [2024-09-30 22:04:46.470200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.470285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.867 [2024-09-30 22:04:46.470293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:01.867 [2024-09-30 22:04:46.470300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:23:01.867 [2024-09-30 22:04:46.470313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.867 [2024-09-30 22:04:46.470412] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:01.867 [2024-09-30 22:04:46.470421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:01.867 [2024-09-30 22:04:46.470429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:01.867 [2024-09-30 22:04:46.470436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.867 [2024-09-30 22:04:46.470446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:01.867 [2024-09-30 22:04:46.470454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:01.867 [2024-09-30 22:04:46.470461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:01.867 [2024-09-30 22:04:46.470468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:01.867 [2024-09-30 22:04:46.470476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:01.867 [2024-09-30 22:04:46.470482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.867 [2024-09-30 22:04:46.470489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:01.867 [2024-09-30 22:04:46.470496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:01.867 [2024-09-30 22:04:46.470502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:01.868 [2024-09-30 22:04:46.470515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:01.868 [2024-09-30 22:04:46.470525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 
22:04:46.470531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:01.868 [2024-09-30 22:04:46.470542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:01.868 [2024-09-30 22:04:46.470548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:01.868 [2024-09-30 22:04:46.470561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:01.868 [2024-09-30 22:04:46.470568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.868 [2024-09-30 22:04:46.470575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:01.868 [2024-09-30 22:04:46.470581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:01.868 [2024-09-30 22:04:46.470587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.868 [2024-09-30 22:04:46.470593] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:01.868 [2024-09-30 22:04:46.470599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:01.868 [2024-09-30 22:04:46.470606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.868 [2024-09-30 22:04:46.470612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:01.868 [2024-09-30 22:04:46.470618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:01.868 [2024-09-30 22:04:46.470624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.868 [2024-09-30 22:04:46.470632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:01.868 [2024-09-30 22:04:46.470639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:01.868 [2024-09-30 22:04:46.470645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:01.868 [2024-09-30 22:04:46.470657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:01.868 [2024-09-30 22:04:46.470664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:01.868 [2024-09-30 22:04:46.470679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:01.868 [2024-09-30 22:04:46.470698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:01.868 [2024-09-30 22:04:46.470704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470711] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:01.868 [2024-09-30 22:04:46.470719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:01.868 [2024-09-30 22:04:46.470731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:01.868 [2024-09-30 22:04:46.470738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.868 [2024-09-30 22:04:46.470747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:01.868 [2024-09-30 
22:04:46.470753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:01.868 [2024-09-30 22:04:46.470759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:01.868 [2024-09-30 22:04:46.470766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:01.868 [2024-09-30 22:04:46.470772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:01.868 [2024-09-30 22:04:46.470778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:01.868 [2024-09-30 22:04:46.470786] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:01.868 [2024-09-30 22:04:46.470798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:01.868 [2024-09-30 22:04:46.470813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:01.868 [2024-09-30 22:04:46.470834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:01.868 [2024-09-30 22:04:46.470841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:01.868 [2024-09-30 22:04:46.470848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:01.868 [2024-09-30 22:04:46.470855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:01.868 [2024-09-30 22:04:46.470906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:01.868 [2024-09-30 22:04:46.470914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.868 [2024-09-30 22:04:46.470929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:01.868 [2024-09-30 22:04:46.470937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:01.868 [2024-09-30 22:04:46.470943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:01.868 [2024-09-30 22:04:46.470951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.470958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:01.868 [2024-09-30 22:04:46.470965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.607 ms 00:23:01.868 [2024-09-30 22:04:46.470977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.477858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.477880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:01.868 [2024-09-30 22:04:46.477888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.834 ms 00:23:01.868 [2024-09-30 22:04:46.477896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.477927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.477937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:01.868 [2024-09-30 22:04:46.477945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:23:01.868 [2024-09-30 22:04:46.477952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.494604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.494654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:01.868 [2024-09-30 22:04:46.494671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.607 ms 00:23:01.868 [2024-09-30 22:04:46.494682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.494727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.494740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:01.868 [2024-09-30 22:04:46.494761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:01.868 [2024-09-30 22:04:46.494775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.494922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.494944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:01.868 [2024-09-30 22:04:46.494956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:23:01.868 [2024-09-30 22:04:46.494971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.495027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 
[2024-09-30 22:04:46.495039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:01.868 [2024-09-30 22:04:46.495050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:23:01.868 [2024-09-30 22:04:46.495060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.501583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.501619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:01.868 [2024-09-30 22:04:46.501632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.491 ms 00:23:01.868 [2024-09-30 22:04:46.501642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.501750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.501771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:23:01.868 [2024-09-30 22:04:46.501783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:01.868 [2024-09-30 22:04:46.501793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.868 [2024-09-30 22:04:46.505817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.868 [2024-09-30 22:04:46.505860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:23:01.868 [2024-09-30 22:04:46.505874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.996 ms 00:23:01.869 [2024-09-30 22:04:46.505884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.869 [2024-09-30 22:04:46.507427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.869 [2024-09-30 22:04:46.507465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:01.869 [2024-09-30 22:04:46.507479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:23:01.869 [2024-09-30 22:04:46.507489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.869 [2024-09-30 22:04:46.522449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.869 [2024-09-30 22:04:46.522484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:01.869 [2024-09-30 22:04:46.522494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.887 ms 00:23:01.869 [2024-09-30 22:04:46.522501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.869 [2024-09-30 22:04:46.522617] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:23:01.869 [2024-09-30 22:04:46.522703] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:23:01.869 [2024-09-30 22:04:46.522779] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:23:01.869 [2024-09-30 22:04:46.522856] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:23:01.869 [2024-09-30 22:04:46.522864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.869 [2024-09-30 22:04:46.522872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:23:01.869 [2024-09-30 22:04:46.522880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:23:01.869 [2024-09-30 22:04:46.522887] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.869 [2024-09-30 22:04:46.522938] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:23:01.869 [2024-09-30 22:04:46.522948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.869 [2024-09-30 22:04:46.522955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:23:01.869 [2024-09-30 22:04:46.522963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:01.869 [2024-09-30 22:04:46.522971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.869 [2024-09-30 22:04:46.525182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.869 [2024-09-30 22:04:46.525229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:23:01.869 [2024-09-30 22:04:46.525238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.192 ms 00:23:01.869 [2024-09-30 22:04:46.525245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.869 [2024-09-30 22:04:46.525827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.869 [2024-09-30 22:04:46.525856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:23:01.869 [2024-09-30 22:04:46.525867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:23:01.869 [2024-09-30 22:04:46.525874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.869 [2024-09-30 22:04:46.525922] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:23:01.869 [2024-09-30 22:04:46.526067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.869 [2024-09-30 22:04:46.526082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:01.869 [2024-09-30 22:04:46.526090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:23:01.869 [2024-09-30 22:04:46.526097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.434 [2024-09-30 22:04:46.950332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.434 [2024-09-30 22:04:46.950391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:02.434 [2024-09-30 22:04:46.950405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 423.939 ms 00:23:02.434 [2024-09-30 22:04:46.950413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.434 [2024-09-30 22:04:46.951676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.434 [2024-09-30 22:04:46.951708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:02.434 [2024-09-30 22:04:46.951721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.868 ms 00:23:02.434 [2024-09-30 22:04:46.951734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.434 [2024-09-30 22:04:46.952102] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:23:02.434 [2024-09-30 22:04:46.952129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.434 [2024-09-30 22:04:46.952138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:02.434 [2024-09-30 22:04:46.952147] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 0.368 ms 00:23:02.434 [2024-09-30 22:04:46.952154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.434 [2024-09-30 22:04:46.952183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.434 [2024-09-30 22:04:46.952205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:02.434 [2024-09-30 22:04:46.952213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:02.434 [2024-09-30 22:04:46.952225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.434 [2024-09-30 22:04:46.952258] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 426.334 ms, result 0 00:23:02.434 [2024-09-30 22:04:46.952294] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:23:02.434 [2024-09-30 22:04:46.952377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.434 [2024-09-30 22:04:46.952386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:02.434 [2024-09-30 22:04:46.952401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:23:02.434 [2024-09-30 22:04:46.952408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.369334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.369383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:02.692 [2024-09-30 22:04:47.369397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 416.554 ms 00:23:02.692 [2024-09-30 22:04:47.369405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.370630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.370661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:02.692 [2024-09-30 22:04:47.370672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.776 ms 00:23:02.692 [2024-09-30 22:04:47.370682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.371007] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:23:02.692 [2024-09-30 22:04:47.371032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.371040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:02.692 [2024-09-30 22:04:47.371048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.325 ms 00:23:02.692 [2024-09-30 22:04:47.371055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.371111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.371120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:02.692 [2024-09-30 22:04:47.371128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:02.692 [2024-09-30 22:04:47.371135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.371169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 418.870 ms, result 
0 00:23:02.692 [2024-09-30 22:04:47.371219] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:02.692 [2024-09-30 22:04:47.371231] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:02.692 [2024-09-30 22:04:47.371241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.371248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:23:02.692 [2024-09-30 22:04:47.371256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 845.334 ms 00:23:02.692 [2024-09-30 22:04:47.371263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.371295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.371303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:23:02.692 [2024-09-30 22:04:47.371310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:02.692 [2024-09-30 22:04:47.371317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.379053] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:02.692 [2024-09-30 22:04:47.379147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.379157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:02.692 [2024-09-30 22:04:47.379166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.814 ms 00:23:02.692 [2024-09-30 22:04:47.379178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.379859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.379884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:23:02.692 [2024-09-30 22:04:47.379893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.605 ms 00:23:02.692 [2024-09-30 22:04:47.379901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.382154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.382171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:23:02.692 [2024-09-30 22:04:47.382184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.230 ms 00:23:02.692 [2024-09-30 22:04:47.382199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.382234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.382243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:23:02.692 [2024-09-30 22:04:47.382250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:02.692 [2024-09-30 22:04:47.382257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.382355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.382369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:02.692 [2024-09-30 22:04:47.382377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:23:02.692 [2024-09-30 22:04:47.382384] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.382406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.382413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:02.692 [2024-09-30 22:04:47.382421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:02.692 [2024-09-30 22:04:47.382428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.382457] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:02.692 [2024-09-30 22:04:47.382466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.382473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:02.692 [2024-09-30 22:04:47.382480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:23:02.692 [2024-09-30 22:04:47.382491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.692 [2024-09-30 22:04:47.382544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:02.692 [2024-09-30 22:04:47.382552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:02.693 [2024-09-30 22:04:47.382560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:23:02.693 [2024-09-30 22:04:47.382567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:02.693 [2024-09-30 22:04:47.383494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 920.186 ms, result 0 00:23:02.693 [2024-09-30 22:04:47.395789] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.693 [2024-09-30 22:04:47.411777] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:02.693 [2024-09-30 22:04:47.419882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:02.693 Validate MD5 checksum, iteration 1 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:02.693 22:04:47 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:02.693 22:04:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:02.950 [2024-09-30 22:04:47.505147] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:02.950 [2024-09-30 22:04:47.505272] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90377 ] 00:23:02.950 [2024-09-30 22:04:47.632617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:02.950 [2024-09-30 22:04:47.653167] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.950 [2024-09-30 22:04:47.686486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.082  Copying: 737/1024 [MB] (737 MBps) Copying: 1024/1024 [MB] (average 735 MBps) 00:23:09.082 00:23:09.082 22:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:09.082 22:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=4208b490fbfea3a3780032d2a6f2243a 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 4208b490fbfea3a3780032d2a6f2243a != \4\2\0\8\b\4\9\0\f\b\f\e\a\3\a\3\7\8\0\0\3\2\d\2\a\6\f\2\2\4\3\a ]] 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:11.012 Validate MD5 checksum, iteration 2 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:11.012 22:04:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:11.012 [2024-09-30 22:04:55.718023] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 
24.11.0-rc0 initialization... 00:23:11.012 [2024-09-30 22:04:55.718136] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90466 ] 00:23:11.270 [2024-09-30 22:04:55.845357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:11.270 [2024-09-30 22:04:55.862811] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.270 [2024-09-30 22:04:55.893816] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.805  Copying: 689/1024 [MB] (689 MBps) Copying: 1024/1024 [MB] (average 681 MBps) 00:23:14.805 00:23:14.805 22:04:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:14.805 22:04:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=b27ff9e974356efa389076b9c9d7ee1c 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ b27ff9e974356efa389076b9c9d7ee1c != \b\2\7\f\f\9\e\9\7\4\3\5\6\e\f\a\3\8\9\0\7\6\b\9\c\9\d\7\e\e\1\c ]] 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 90356 ]] 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 90356 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 90356 ']' 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 90356 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90356 00:23:17.339 killing process with pid 90356 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 90356' 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 90356 00:23:17.339 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 90356 00:23:17.339 [2024-09-30 22:05:01.773446] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:17.339 [2024-09-30 22:05:01.778478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.339 [2024-09-30 22:05:01.778512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:17.339 [2024-09-30 22:05:01.778523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:17.339 [2024-09-30 22:05:01.778529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.339 [2024-09-30 22:05:01.778545] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:17.340 [2024-09-30 22:05:01.778938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.778956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:17.340 [2024-09-30 22:05:01.778964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.383 ms 00:23:17.340 [2024-09-30 22:05:01.778970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.779145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.779160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:17.340 [2024-09-30 22:05:01.779166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.160 ms 00:23:17.340 [2024-09-30 22:05:01.779172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.780507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.780531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:17.340 [2024-09-30 22:05:01.780539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.323 ms 00:23:17.340 [2024-09-30 22:05:01.780545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.781418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.781435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:17.340 [2024-09-30 22:05:01.781446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.851 ms 00:23:17.340 [2024-09-30 22:05:01.781453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.782884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.782912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:17.340 [2024-09-30 22:05:01.782919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.395 ms 00:23:17.340 [2024-09-30 22:05:01.782925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.784133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.784164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:17.340 [2024-09-30 22:05:01.784176] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl] duration: 1.181 ms 00:23:17.340 [2024-09-30 22:05:01.784182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.784256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.784264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:17.340 [2024-09-30 22:05:01.784271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:23:17.340 [2024-09-30 22:05:01.784276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.785124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.785237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:17.340 [2024-09-30 22:05:01.785249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:23:17.340 [2024-09-30 22:05:01.785255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.786239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.786260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:17.340 [2024-09-30 22:05:01.786267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.960 ms 00:23:17.340 [2024-09-30 22:05:01.786272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.787410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.787437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:17.340 [2024-09-30 22:05:01.787443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.113 ms 00:23:17.340 [2024-09-30 22:05:01.787448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.788606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.788697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:17.340 [2024-09-30 22:05:01.788708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:23:17.340 [2024-09-30 22:05:01.788714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.788736] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:17.340 [2024-09-30 22:05:01.788746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:17.340 [2024-09-30 22:05:01.788753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:17.340 [2024-09-30 22:05:01.788760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:17.340 [2024-09-30 22:05:01.788766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788789] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:17.340 [2024-09-30 22:05:01.788853] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:17.340 [2024-09-30 22:05:01.788862] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 04fea156-4f1d-44df-ae9c-f9b8d05603b4 00:23:17.340 [2024-09-30 22:05:01.788868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:17.340 [2024-09-30 22:05:01.788874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:23:17.340 [2024-09-30 22:05:01.788879] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:23:17.340 [2024-09-30 22:05:01.788885] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:23:17.340 [2024-09-30 22:05:01.788890] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:17.340 [2024-09-30 22:05:01.788896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:17.340 [2024-09-30 22:05:01.788902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:17.340 [2024-09-30 22:05:01.788907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:17.340 [2024-09-30 22:05:01.788912] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:17.340 [2024-09-30 22:05:01.788917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.788924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:17.340 [2024-09-30 22:05:01.788929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:23:17.340 [2024-09-30 22:05:01.788935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.790210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.790232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:17.340 [2024-09-30 22:05:01.790239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.260 ms 
00:23:17.340 [2024-09-30 22:05:01.790245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.790317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.340 [2024-09-30 22:05:01.790324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:17.340 [2024-09-30 22:05:01.790332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:23:17.340 [2024-09-30 22:05:01.790337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.795083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.340 [2024-09-30 22:05:01.795109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:17.340 [2024-09-30 22:05:01.795122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.340 [2024-09-30 22:05:01.795128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.795150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.340 [2024-09-30 22:05:01.795157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:17.340 [2024-09-30 22:05:01.795164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.340 [2024-09-30 22:05:01.795170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.795239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.340 [2024-09-30 22:05:01.795248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:17.340 [2024-09-30 22:05:01.795253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.340 [2024-09-30 22:05:01.795259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.795272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.340 [2024-09-30 22:05:01.795279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:17.340 [2024-09-30 22:05:01.795284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.340 [2024-09-30 22:05:01.795292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.803378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.340 [2024-09-30 22:05:01.803504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:17.340 [2024-09-30 22:05:01.803522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.340 [2024-09-30 22:05:01.803529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.340 [2024-09-30 22:05:01.810049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.340 [2024-09-30 22:05:01.810079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:17.340 [2024-09-30 22:05:01.810087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.341 [2024-09-30 22:05:01.810102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.341 [2024-09-30 22:05:01.810134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.341 [2024-09-30 22:05:01.810141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:17.341 [2024-09-30 22:05:01.810147] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.341 [2024-09-30 22:05:01.810153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.341 [2024-09-30 22:05:01.810212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.341 [2024-09-30 22:05:01.810220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:17.341 [2024-09-30 22:05:01.810229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.341 [2024-09-30 22:05:01.810234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.341 [2024-09-30 22:05:01.810289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.341 [2024-09-30 22:05:01.810297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:17.341 [2024-09-30 22:05:01.810303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.341 [2024-09-30 22:05:01.810308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.341 [2024-09-30 22:05:01.810330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.341 [2024-09-30 22:05:01.810337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:17.341 [2024-09-30 22:05:01.810343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.341 [2024-09-30 22:05:01.810348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.341 [2024-09-30 22:05:01.810379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.341 [2024-09-30 22:05:01.810386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:17.341 [2024-09-30 22:05:01.810392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.341 [2024-09-30 22:05:01.810401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.341 [2024-09-30 22:05:01.810432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:17.341 [2024-09-30 22:05:01.810440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:17.341 [2024-09-30 22:05:01.810446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:17.341 [2024-09-30 22:05:01.810452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.341 [2024-09-30 22:05:01.810548] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 32.046 ms, result 0 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:17.341 Remove shared memory files 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:17.341 22:05:01 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid90194 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:17.341 ************************************ 00:23:17.341 END TEST ftl_upgrade_shutdown 00:23:17.341 ************************************ 00:23:17.341 00:23:17.341 real 1m9.184s 00:23:17.341 user 1m33.831s 00:23:17.341 sys 0m17.474s 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:17.341 22:05:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.341 22:05:02 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:23:17.341 22:05:02 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:17.341 22:05:02 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:23:17.341 22:05:02 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:17.341 22:05:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:17.341 ************************************ 00:23:17.341 START TEST ftl_restore_fast 00:23:17.341 ************************************ 00:23:17.341 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:17.341 * Looking for test storage... 00:23:17.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.341 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:17.341 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:23:17.341 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:17.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.600 --rc genhtml_branch_coverage=1 00:23:17.600 --rc genhtml_function_coverage=1 00:23:17.600 --rc genhtml_legend=1 00:23:17.600 --rc geninfo_all_blocks=1 00:23:17.600 --rc geninfo_unexecuted_blocks=1 00:23:17.600 00:23:17.600 ' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:17.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.600 --rc genhtml_branch_coverage=1 00:23:17.600 --rc genhtml_function_coverage=1 00:23:17.600 --rc genhtml_legend=1 00:23:17.600 --rc geninfo_all_blocks=1 00:23:17.600 --rc geninfo_unexecuted_blocks=1 00:23:17.600 00:23:17.600 ' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:17.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.600 --rc genhtml_branch_coverage=1 00:23:17.600 --rc genhtml_function_coverage=1 00:23:17.600 --rc genhtml_legend=1 00:23:17.600 --rc geninfo_all_blocks=1 00:23:17.600 --rc geninfo_unexecuted_blocks=1 00:23:17.600 00:23:17.600 ' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:17.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.600 --rc genhtml_branch_coverage=1 00:23:17.600 --rc genhtml_function_coverage=1 00:23:17.600 --rc genhtml_legend=1 00:23:17.600 --rc geninfo_all_blocks=1 00:23:17.600 --rc geninfo_unexecuted_blocks=1 00:23:17.600 00:23:17.600 ' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
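Note: the scripts/common.sh trace above ("lt 1.15 2") implements a component-wise version comparison. A minimal sketch of that logic, paraphrased from the xtrace rather than copied from the script (flag handling in the real cmp_versions differs in detail):

    # cmp_versions VER1 OP VER2 -- split on ".-:" and compare numerically,
    # component by component, as the traced IFS/read lines suggest
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]    # every component equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> exit status 0 (1.15 < 2)
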
00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.600 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.fwU3UUuAxe 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:17.601 22:05:02 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=90616 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 90616 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 90616 ']' 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:17.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:17.601 22:05:02 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:23:17.601 [2024-09-30 22:05:02.257820] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:17.601 [2024-09-30 22:05:02.258378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90616 ] 00:23:17.601 [2024-09-30 22:05:02.386242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
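Note: the restore.sh xtrace above ("getopts :u:c:f opt", "fast_shutdown=1", "nv_cache=0000:00:10.0", "shift 3", "device=0000:00:11.0") corresponds to option handling along these lines. This is a sketch inferred from the trace, not the verbatim script, and the meaning of -u is an assumption:

    # invoked as: restore.sh -f -c 0000:00:10.0 0000:00:11.0
    while getopts ':u:c:f' opt; do
        case $opt in
            f) fast_shutdown=1 ;;     # -f: create the FTL bdev with --fast-shutdown
            c) nv_cache=$OPTARG ;;    # -c <bdf>: controller used as the NV cache
            u) uuid=$OPTARG ;;        # -u <uuid>: assumed to select an existing instance
        esac
    done
    shift $(( OPTIND - 1 ))           # traced as "shift 3" for "-f -c <bdf>"
    device=$1                         # remaining positional arg: base bdev BDF
    timeout=240
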
00:23:17.601 [2024-09-30 22:05:02.403465] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.859 [2024-09-30 22:05:02.435366] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:23:18.425 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:18.684 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:18.943 { 00:23:18.943 "name": "nvme0n1", 00:23:18.943 "aliases": [ 00:23:18.943 "714268d1-afda-4b34-90a7-e7021810035c" 00:23:18.943 ], 00:23:18.943 "product_name": "NVMe disk", 00:23:18.943 "block_size": 4096, 00:23:18.943 "num_blocks": 1310720, 00:23:18.943 "uuid": "714268d1-afda-4b34-90a7-e7021810035c", 00:23:18.943 "numa_id": -1, 00:23:18.943 "assigned_rate_limits": { 00:23:18.943 "rw_ios_per_sec": 0, 00:23:18.943 "rw_mbytes_per_sec": 0, 00:23:18.943 "r_mbytes_per_sec": 0, 00:23:18.943 "w_mbytes_per_sec": 0 00:23:18.943 }, 00:23:18.943 "claimed": true, 00:23:18.943 "claim_type": "read_many_write_one", 00:23:18.943 "zoned": false, 00:23:18.943 "supported_io_types": { 00:23:18.943 "read": true, 00:23:18.943 "write": true, 00:23:18.943 "unmap": true, 00:23:18.943 "flush": true, 00:23:18.943 "reset": true, 00:23:18.943 "nvme_admin": true, 00:23:18.943 "nvme_io": true, 00:23:18.943 "nvme_io_md": false, 00:23:18.943 "write_zeroes": true, 00:23:18.943 "zcopy": false, 00:23:18.943 "get_zone_info": false, 00:23:18.943 "zone_management": false, 00:23:18.943 "zone_append": false, 00:23:18.943 "compare": true, 00:23:18.943 "compare_and_write": false, 00:23:18.943 "abort": true, 00:23:18.943 "seek_hole": false, 00:23:18.943 "seek_data": false, 00:23:18.943 "copy": true, 00:23:18.943 "nvme_iov_md": false 00:23:18.943 }, 00:23:18.943 "driver_specific": { 00:23:18.943 "nvme": [ 00:23:18.943 { 00:23:18.943 "pci_address": "0000:00:11.0", 00:23:18.943 "trid": { 00:23:18.943 "trtype": "PCIe", 00:23:18.943 "traddr": "0000:00:11.0" 00:23:18.943 }, 00:23:18.943 "ctrlr_data": { 00:23:18.943 "cntlid": 0, 00:23:18.943 
"vendor_id": "0x1b36", 00:23:18.943 "model_number": "QEMU NVMe Ctrl", 00:23:18.943 "serial_number": "12341", 00:23:18.943 "firmware_revision": "8.0.0", 00:23:18.943 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:18.943 "oacs": { 00:23:18.943 "security": 0, 00:23:18.943 "format": 1, 00:23:18.943 "firmware": 0, 00:23:18.943 "ns_manage": 1 00:23:18.943 }, 00:23:18.943 "multi_ctrlr": false, 00:23:18.943 "ana_reporting": false 00:23:18.943 }, 00:23:18.943 "vs": { 00:23:18.943 "nvme_version": "1.4" 00:23:18.943 }, 00:23:18.943 "ns_data": { 00:23:18.943 "id": 1, 00:23:18.943 "can_share": false 00:23:18.943 } 00:23:18.943 } 00:23:18.943 ], 00:23:18.943 "mp_policy": "active_passive" 00:23:18.943 } 00:23:18.943 } 00:23:18.943 ]' 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:18.943 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:19.202 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=16d9a5d5-2524-4766-bb35-b558f0c45b3e 00:23:19.202 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:23:19.202 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 16d9a5d5-2524-4766-bb35-b558f0c45b3e 00:23:19.202 22:05:03 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:19.460 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=9e21a614-2045-458d-91c1-63a00a2877e8 00:23:19.460 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9e21a614-2045-458d-91c1-63a00a2877e8 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local 
bdev_name=0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:19.718 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:19.718 { 00:23:19.718 "name": "0e98a1e4-04c3-4a01-b84b-8ebf22cdbead", 00:23:19.718 "aliases": [ 00:23:19.718 "lvs/nvme0n1p0" 00:23:19.718 ], 00:23:19.718 "product_name": "Logical Volume", 00:23:19.718 "block_size": 4096, 00:23:19.718 "num_blocks": 26476544, 00:23:19.718 "uuid": "0e98a1e4-04c3-4a01-b84b-8ebf22cdbead", 00:23:19.718 "assigned_rate_limits": { 00:23:19.718 "rw_ios_per_sec": 0, 00:23:19.718 "rw_mbytes_per_sec": 0, 00:23:19.718 "r_mbytes_per_sec": 0, 00:23:19.718 "w_mbytes_per_sec": 0 00:23:19.718 }, 00:23:19.718 "claimed": false, 00:23:19.718 "zoned": false, 00:23:19.718 "supported_io_types": { 00:23:19.718 "read": true, 00:23:19.718 "write": true, 00:23:19.718 "unmap": true, 00:23:19.718 "flush": false, 00:23:19.718 "reset": true, 00:23:19.718 "nvme_admin": false, 00:23:19.718 "nvme_io": false, 00:23:19.718 "nvme_io_md": false, 00:23:19.718 "write_zeroes": true, 00:23:19.718 "zcopy": false, 00:23:19.718 "get_zone_info": false, 00:23:19.718 "zone_management": false, 00:23:19.718 "zone_append": false, 00:23:19.718 "compare": false, 00:23:19.718 "compare_and_write": false, 00:23:19.718 "abort": false, 00:23:19.718 "seek_hole": true, 00:23:19.718 "seek_data": true, 00:23:19.718 "copy": false, 00:23:19.719 "nvme_iov_md": false 00:23:19.719 }, 00:23:19.719 "driver_specific": { 00:23:19.719 "lvol": { 00:23:19.719 "lvol_store_uuid": "9e21a614-2045-458d-91c1-63a00a2877e8", 00:23:19.719 "base_bdev": "nvme0n1", 00:23:19.719 "thin_provision": true, 00:23:19.719 "num_allocated_clusters": 0, 00:23:19.719 "snapshot": false, 00:23:19.719 "clone": false, 00:23:19.719 "esnap_clone": false 00:23:19.719 } 00:23:19.719 } 00:23:19.719 } 00:23:19.719 ]' 00:23:19.719 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:19.719 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:19.719 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:19.977 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:19.977 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:19.977 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:19.977 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:23:19.977 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:23:19.977 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1378 -- # local bdev_name=0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:20.236 { 00:23:20.236 "name": "0e98a1e4-04c3-4a01-b84b-8ebf22cdbead", 00:23:20.236 "aliases": [ 00:23:20.236 "lvs/nvme0n1p0" 00:23:20.236 ], 00:23:20.236 "product_name": "Logical Volume", 00:23:20.236 "block_size": 4096, 00:23:20.236 "num_blocks": 26476544, 00:23:20.236 "uuid": "0e98a1e4-04c3-4a01-b84b-8ebf22cdbead", 00:23:20.236 "assigned_rate_limits": { 00:23:20.236 "rw_ios_per_sec": 0, 00:23:20.236 "rw_mbytes_per_sec": 0, 00:23:20.236 "r_mbytes_per_sec": 0, 00:23:20.236 "w_mbytes_per_sec": 0 00:23:20.236 }, 00:23:20.236 "claimed": false, 00:23:20.236 "zoned": false, 00:23:20.236 "supported_io_types": { 00:23:20.236 "read": true, 00:23:20.236 "write": true, 00:23:20.236 "unmap": true, 00:23:20.236 "flush": false, 00:23:20.236 "reset": true, 00:23:20.236 "nvme_admin": false, 00:23:20.236 "nvme_io": false, 00:23:20.236 "nvme_io_md": false, 00:23:20.236 "write_zeroes": true, 00:23:20.236 "zcopy": false, 00:23:20.236 "get_zone_info": false, 00:23:20.236 "zone_management": false, 00:23:20.236 "zone_append": false, 00:23:20.236 "compare": false, 00:23:20.236 "compare_and_write": false, 00:23:20.236 "abort": false, 00:23:20.236 "seek_hole": true, 00:23:20.236 "seek_data": true, 00:23:20.236 "copy": false, 00:23:20.236 "nvme_iov_md": false 00:23:20.236 }, 00:23:20.236 "driver_specific": { 00:23:20.236 "lvol": { 00:23:20.236 "lvol_store_uuid": "9e21a614-2045-458d-91c1-63a00a2877e8", 00:23:20.236 "base_bdev": "nvme0n1", 00:23:20.236 "thin_provision": true, 00:23:20.236 "num_allocated_clusters": 0, 00:23:20.236 "snapshot": false, 00:23:20.236 "clone": false, 00:23:20.236 "esnap_clone": false 00:23:20.236 } 00:23:20.236 } 00:23:20.236 } 00:23:20.236 ]' 00:23:20.236 22:05:04 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:20.236 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:20.236 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 
-- # local bdev_info 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:20.494 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:20.753 { 00:23:20.753 "name": "0e98a1e4-04c3-4a01-b84b-8ebf22cdbead", 00:23:20.753 "aliases": [ 00:23:20.753 "lvs/nvme0n1p0" 00:23:20.753 ], 00:23:20.753 "product_name": "Logical Volume", 00:23:20.753 "block_size": 4096, 00:23:20.753 "num_blocks": 26476544, 00:23:20.753 "uuid": "0e98a1e4-04c3-4a01-b84b-8ebf22cdbead", 00:23:20.753 "assigned_rate_limits": { 00:23:20.753 "rw_ios_per_sec": 0, 00:23:20.753 "rw_mbytes_per_sec": 0, 00:23:20.753 "r_mbytes_per_sec": 0, 00:23:20.753 "w_mbytes_per_sec": 0 00:23:20.753 }, 00:23:20.753 "claimed": false, 00:23:20.753 "zoned": false, 00:23:20.753 "supported_io_types": { 00:23:20.753 "read": true, 00:23:20.753 "write": true, 00:23:20.753 "unmap": true, 00:23:20.753 "flush": false, 00:23:20.753 "reset": true, 00:23:20.753 "nvme_admin": false, 00:23:20.753 "nvme_io": false, 00:23:20.753 "nvme_io_md": false, 00:23:20.753 "write_zeroes": true, 00:23:20.753 "zcopy": false, 00:23:20.753 "get_zone_info": false, 00:23:20.753 "zone_management": false, 00:23:20.753 "zone_append": false, 00:23:20.753 "compare": false, 00:23:20.753 "compare_and_write": false, 00:23:20.753 "abort": false, 00:23:20.753 "seek_hole": true, 00:23:20.753 "seek_data": true, 00:23:20.753 "copy": false, 00:23:20.753 "nvme_iov_md": false 00:23:20.753 }, 00:23:20.753 "driver_specific": { 00:23:20.753 "lvol": { 00:23:20.753 "lvol_store_uuid": "9e21a614-2045-458d-91c1-63a00a2877e8", 00:23:20.753 "base_bdev": "nvme0n1", 00:23:20.753 "thin_provision": true, 00:23:20.753 "num_allocated_clusters": 0, 00:23:20.753 "snapshot": false, 00:23:20.753 "clone": false, 00:23:20.753 "esnap_clone": false 00:23:20.753 } 00:23:20.753 } 00:23:20.753 } 00:23:20.753 ]' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead --l2p_dram_limit 10' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:23:20.753 22:05:05 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0e98a1e4-04c3-4a01-b84b-8ebf22cdbead --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:23:21.013 [2024-09-30 22:05:05.667815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.667861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:21.013 [2024-09-30 22:05:05.667876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:21.013 [2024-09-30 22:05:05.667884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.667952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.667962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.013 [2024-09-30 22:05:05.667974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:21.013 [2024-09-30 22:05:05.667986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.668010] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:21.013 [2024-09-30 22:05:05.668282] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:21.013 [2024-09-30 22:05:05.668300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.668310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.013 [2024-09-30 22:05:05.668320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:23:21.013 [2024-09-30 22:05:05.668327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.668363] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e32ab2ae-aa10-4127-98b2-3339952528db 00:23:21.013 [2024-09-30 22:05:05.669477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.669504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:21.013 [2024-09-30 22:05:05.669513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:21.013 [2024-09-30 22:05:05.669525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.674890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.674924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.013 [2024-09-30 22:05:05.674934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.289 ms 00:23:21.013 [2024-09-30 22:05:05.674946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.675036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.675046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.013 [2024-09-30 22:05:05.675057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:21.013 [2024-09-30 22:05:05.675067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.675108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.675119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO 
device 00:23:21.013 [2024-09-30 22:05:05.675127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:21.013 [2024-09-30 22:05:05.675136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.675157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.013 [2024-09-30 22:05:05.676651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.676682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.013 [2024-09-30 22:05:05.676697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.498 ms 00:23:21.013 [2024-09-30 22:05:05.676704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.676737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.013 [2024-09-30 22:05:05.676744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:21.013 [2024-09-30 22:05:05.676759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:21.013 [2024-09-30 22:05:05.676766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.013 [2024-09-30 22:05:05.676783] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:21.013 [2024-09-30 22:05:05.676919] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:21.013 [2024-09-30 22:05:05.676933] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:21.013 [2024-09-30 22:05:05.676943] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:21.013 [2024-09-30 22:05:05.676956] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:21.013 [2024-09-30 22:05:05.676968] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:21.013 [2024-09-30 22:05:05.676983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:21.013 [2024-09-30 22:05:05.676990] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:21.013 [2024-09-30 22:05:05.676999] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:21.013 [2024-09-30 22:05:05.677007] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:21.014 [2024-09-30 22:05:05.677016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.014 [2024-09-30 22:05:05.677023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:21.014 [2024-09-30 22:05:05.677033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:23:21.014 [2024-09-30 22:05:05.677040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.014 [2024-09-30 22:05:05.677125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.014 [2024-09-30 22:05:05.677133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:21.014 [2024-09-30 22:05:05.677141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:21.014 [2024-09-30 22:05:05.677148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.014 
[2024-09-30 22:05:05.677257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:21.014 [2024-09-30 22:05:05.677267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:21.014 [2024-09-30 22:05:05.677276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:21.014 [2024-09-30 22:05:05.677302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:21.014 [2024-09-30 22:05:05.677329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.014 [2024-09-30 22:05:05.677346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:21.014 [2024-09-30 22:05:05.677440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:21.014 [2024-09-30 22:05:05.677452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.014 [2024-09-30 22:05:05.677459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:21.014 [2024-09-30 22:05:05.677469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:21.014 [2024-09-30 22:05:05.677476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:21.014 [2024-09-30 22:05:05.677493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:21.014 [2024-09-30 22:05:05.677521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:21.014 [2024-09-30 22:05:05.677545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:21.014 [2024-09-30 22:05:05.677571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:21.014 [2024-09-30 22:05:05.677597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:23:21.014 [2024-09-30 22:05:05.677623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.014 [2024-09-30 22:05:05.677640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:21.014 [2024-09-30 22:05:05.677647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:21.014 [2024-09-30 22:05:05.677656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.014 [2024-09-30 22:05:05.677664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:21.014 [2024-09-30 22:05:05.677673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:21.014 [2024-09-30 22:05:05.677680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:21.014 [2024-09-30 22:05:05.677695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:21.014 [2024-09-30 22:05:05.677702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677709] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:21.014 [2024-09-30 22:05:05.677719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:21.014 [2024-09-30 22:05:05.677726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.014 [2024-09-30 22:05:05.677742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:21.014 [2024-09-30 22:05:05.677751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:21.014 [2024-09-30 22:05:05.677757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:21.014 [2024-09-30 22:05:05.677767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:21.014 [2024-09-30 22:05:05.677773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:21.014 [2024-09-30 22:05:05.677781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:21.014 [2024-09-30 22:05:05.677791] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:21.014 [2024-09-30 22:05:05.677802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.014 [2024-09-30 22:05:05.677813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:21.014 [2024-09-30 22:05:05.677822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:21.014 [2024-09-30 22:05:05.677829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:21.014 [2024-09-30 22:05:05.677837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:21.014 [2024-09-30 22:05:05.677844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x5920 blk_sz:0x800 00:23:21.014 [2024-09-30 22:05:05.677854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:21.014 [2024-09-30 22:05:05.677860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:21.014 [2024-09-30 22:05:05.677869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:21.014 [2024-09-30 22:05:05.677876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:21.014 [2024-09-30 22:05:05.677884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:21.014 [2024-09-30 22:05:05.677891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:21.014 [2024-09-30 22:05:05.677900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:21.014 [2024-09-30 22:05:05.677906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:21.014 [2024-09-30 22:05:05.677915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:21.014 [2024-09-30 22:05:05.677922] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:21.014 [2024-09-30 22:05:05.677931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.014 [2024-09-30 22:05:05.677939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:21.014 [2024-09-30 22:05:05.677948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:21.014 [2024-09-30 22:05:05.677954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:21.014 [2024-09-30 22:05:05.677963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:21.014 [2024-09-30 22:05:05.677971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.014 [2024-09-30 22:05:05.677980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:21.014 [2024-09-30 22:05:05.677988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:23:21.014 [2024-09-30 22:05:05.677997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.014 [2024-09-30 22:05:05.678034] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
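The SB metadata layout lines above encode every region as hexadecimal block offsets and sizes. Assuming a 4 KiB FTL block, which the dumps themselves imply (the l2p region appears both as blk_sz:0x5000 and as 80.00 MiB, and 0x5000 blocks x 4 KiB = 80 MiB), the hex pairs decode back to the MiB figures printed earlier. A small decoder sketch under that assumption:

    BLOCK = 4096  # assumed FTL block size, consistent with the dumps above

    def region_mib(blk_offs: int, blk_sz: int):
        """Convert a blk_offs/blk_sz pair from the SB layout dump to MiB."""
        return blk_offs * BLOCK / 2**20, blk_sz * BLOCK / 2**20

    print(region_mib(0x20, 0x5000))     # (0.125, 80.0): the l2p region
    print(region_mib(0x40, 0x1900000))  # (0.25, 102400.0): base-dev data region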
00:23:21.014 [2024-09-30 22:05:05.678045] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:23.552 [2024-09-30 22:05:07.914346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.914548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:23.552 [2024-09-30 22:05:07.914572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2236.303 ms 00:23:23.552 [2024-09-30 22:05:07.914583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.923038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.923081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:23.552 [2024-09-30 22:05:07.923094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.393 ms 00:23:23.552 [2024-09-30 22:05:07.923107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.923200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.923213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:23.552 [2024-09-30 22:05:07.923222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:23.552 [2024-09-30 22:05:07.923231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.931161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.931212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:23.552 [2024-09-30 22:05:07.931222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.886 ms 00:23:23.552 [2024-09-30 22:05:07.931242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.931269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.931279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.552 [2024-09-30 22:05:07.931287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:23.552 [2024-09-30 22:05:07.931295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.931610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.931628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.552 [2024-09-30 22:05:07.931636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:23:23.552 [2024-09-30 22:05:07.931651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.931750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.931765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.552 [2024-09-30 22:05:07.931780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:23.552 [2024-09-30 22:05:07.931789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.949749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.949800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.552 [2024-09-30 
22:05:07.949815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.940 ms 00:23:23.552 [2024-09-30 22:05:07.949828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:07.959172] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:23.552 [2024-09-30 22:05:07.962058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:07.962090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:23.552 [2024-09-30 22:05:07.962110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.128 ms 00:23:23.552 [2024-09-30 22:05:07.962118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.011442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.011518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:23.552 [2024-09-30 22:05:08.011550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.288 ms 00:23:23.552 [2024-09-30 22:05:08.011572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.011956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.011997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:23.552 [2024-09-30 22:05:08.012018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:23:23.552 [2024-09-30 22:05:08.012033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.017646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.017951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:23.552 [2024-09-30 22:05:08.018001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.538 ms 00:23:23.552 [2024-09-30 22:05:08.018022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.023451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.023520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:23.552 [2024-09-30 22:05:08.023550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.362 ms 00:23:23.552 [2024-09-30 22:05:08.023568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.024394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.024552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:23.552 [2024-09-30 22:05:08.024571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:23:23.552 [2024-09-30 22:05:08.024583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.050664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.050767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:23.552 [2024-09-30 22:05:08.050822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.044 ms 00:23:23.552 [2024-09-30 22:05:08.050848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.054751] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.054857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:23.552 [2024-09-30 22:05:08.054915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.825 ms 00:23:23.552 [2024-09-30 22:05:08.054938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.057751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.057850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:23.552 [2024-09-30 22:05:08.057914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.760 ms 00:23:23.552 [2024-09-30 22:05:08.057936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.060963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.061066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:23.552 [2024-09-30 22:05:08.061125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:23:23.552 [2024-09-30 22:05:08.061149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.061211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.061308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:23.552 [2024-09-30 22:05:08.061347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:23.552 [2024-09-30 22:05:08.061366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.061440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.552 [2024-09-30 22:05:08.061462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:23.552 [2024-09-30 22:05:08.061522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:23.552 [2024-09-30 22:05:08.061544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.552 [2024-09-30 22:05:08.062409] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2394.205 ms, result 0 00:23:23.552 { 00:23:23.552 "name": "ftl0", 00:23:23.552 "uuid": "e32ab2ae-aa10-4127-98b2-3339952528db" 00:23:23.552 } 00:23:23.552 22:05:08 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:23.552 22:05:08 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:23.552 22:05:08 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:23:23.552 22:05:08 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:23.812 [2024-09-30 22:05:08.463617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.812 [2024-09-30 22:05:08.463779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:23.812 [2024-09-30 22:05:08.463837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:23.812 [2024-09-30 22:05:08.463863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.812 [2024-09-30 22:05:08.463912] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
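The finish_msg line above puts the whole 'FTL startup' management process at 2394.205 ms, and the per-step durations show where that time went: the NV cache scrub alone took 2236.303 ms, roughly 93% of startup, or about 447 ms per chunk for the 5 chunks being scrubbed. In Python, from the logged values:

    total_ms = 2394.205   # "FTL startup ... duration" (finish_msg)
    scrub_ms = 2236.303   # "Scrub NV cache" trace_step duration
    chunks = 5            # "Scrubbing 5 chunks"
    print(scrub_ms / total_ms)  # ~0.934: the scrub dominates startup time
    print(scrub_ms / chunks)    # ~447.3 ms per chunk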
00:23:23.812 [2024-09-30 22:05:08.464403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.812 [2024-09-30 22:05:08.464496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:23.812 [2024-09-30 22:05:08.464550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.445 ms 00:23:23.812 [2024-09-30 22:05:08.464593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.812 [2024-09-30 22:05:08.464867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.812 [2024-09-30 22:05:08.464931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:23.812 [2024-09-30 22:05:08.464978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:23:23.812 [2024-09-30 22:05:08.465023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.812 [2024-09-30 22:05:08.468294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.812 [2024-09-30 22:05:08.468372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:23.812 [2024-09-30 22:05:08.468421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:23:23.812 [2024-09-30 22:05:08.468465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.812 [2024-09-30 22:05:08.474619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.812 [2024-09-30 22:05:08.474712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:23.812 [2024-09-30 22:05:08.474762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:23:23.812 [2024-09-30 22:05:08.474805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.476219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.813 [2024-09-30 22:05:08.476320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:23.813 [2024-09-30 22:05:08.476371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:23:23.813 [2024-09-30 22:05:08.476414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.480175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.813 [2024-09-30 22:05:08.480296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:23.813 [2024-09-30 22:05:08.480357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:23:23.813 [2024-09-30 22:05:08.480380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.480568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.813 [2024-09-30 22:05:08.480602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:23.813 [2024-09-30 22:05:08.480625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:23.813 [2024-09-30 22:05:08.480646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.482169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.813 [2024-09-30 22:05:08.482277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:23.813 [2024-09-30 22:05:08.482330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms 00:23:23.813 [2024-09-30 22:05:08.482352] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.483575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.813 [2024-09-30 22:05:08.483669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:23.813 [2024-09-30 22:05:08.483720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:23:23.813 [2024-09-30 22:05:08.483741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.484689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.813 [2024-09-30 22:05:08.484779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:23.813 [2024-09-30 22:05:08.484830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.904 ms 00:23:23.813 [2024-09-30 22:05:08.484851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.485895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.813 [2024-09-30 22:05:08.485987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:23.813 [2024-09-30 22:05:08.486039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:23:23.813 [2024-09-30 22:05:08.486061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.813 [2024-09-30 22:05:08.486100] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:23.813 [2024-09-30 22:05:08.486155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 
22:05:08.486488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
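The band validity dump is also consistent with the layout: the base-device data region of 0x1900000 blocks divides evenly into the 100 bands listed, giving 262144 blocks (1 GiB at 4 KiB per block) per band, of which 261120 show up as addressable in the "0 / 261120" counters; the remaining 1024 blocks per band are presumably per-band metadata (an inference, not stated in the log). Checking the arithmetic:

    data_blocks = 0x1900000            # base-dev data region, SB dump above
    bands = 100                        # bands 1..100 in the validity dump
    per_band = data_blocks // bands    # 262144 blocks = 1 GiB at 4 KiB
    print(per_band - 261120)           # 1024 blocks/band not user-addressable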
00:23:23.813 [2024-09-30 22:05:08.486694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:23.813 [2024-09-30 22:05:08.486809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.486994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:23.814 [2024-09-30 22:05:08.487217] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:23.814 [2024-09-30 22:05:08.487228] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e32ab2ae-aa10-4127-98b2-3339952528db 00:23:23.814 [2024-09-30 22:05:08.487236] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:23.814 [2024-09-30 22:05:08.487244] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:23.814 [2024-09-30 22:05:08.487251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:23.814 [2024-09-30 22:05:08.487260] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:23.814 [2024-09-30 22:05:08.487268] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:23.814 [2024-09-30 22:05:08.487277] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:23.814 [2024-09-30 22:05:08.487284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:23.814 [2024-09-30 22:05:08.487292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:23.814 [2024-09-30 22:05:08.487298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:23.814 [2024-09-30 22:05:08.487307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.814 [2024-09-30 22:05:08.487316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:23.814 [2024-09-30 22:05:08.487326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.208 ms 00:23:23.814 [2024-09-30 22:05:08.487333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.814 [2024-09-30 22:05:08.488782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.814 [2024-09-30 22:05:08.488797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
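The statistics dump just above also explains the "WAF: inf" line: write amplification is presumably reported as total (physical) writes over user (host) writes, and with 960 internal writes from the fresh start against zero user writes the ratio is infinite. As a one-liner:

    total_writes, user_writes = 960, 0   # from ftl_dev_dump_stats above
    # With no host I/O yet the ratio is undefined, logged as "inf".
    print(float('inf') if user_writes == 0 else total_writes / user_writes)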
00:23:23.814 [2024-09-30 22:05:08.488807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.427 ms 00:23:23.814 [2024-09-30 22:05:08.488814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.814 [2024-09-30 22:05:08.488902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.814 [2024-09-30 22:05:08.488911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:23.814 [2024-09-30 22:05:08.488920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:23.814 [2024-09-30 22:05:08.488927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.814 [2024-09-30 22:05:08.494142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.814 [2024-09-30 22:05:08.494292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.814 [2024-09-30 22:05:08.494309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.814 [2024-09-30 22:05:08.494317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.814 [2024-09-30 22:05:08.494370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.814 [2024-09-30 22:05:08.494378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.814 [2024-09-30 22:05:08.494388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.814 [2024-09-30 22:05:08.494395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.814 [2024-09-30 22:05:08.494454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.814 [2024-09-30 22:05:08.494464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.814 [2024-09-30 22:05:08.494474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.814 [2024-09-30 22:05:08.494481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.814 [2024-09-30 22:05:08.494500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.494507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.815 [2024-09-30 22:05:08.494516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.494523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.503216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.503250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:23.815 [2024-09-30 22:05:08.503262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.503269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.510839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.510876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:23.815 [2024-09-30 22:05:08.510887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.510894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.510957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.510967] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:23.815 [2024-09-30 22:05:08.510976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.510983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.511017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.511026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:23.815 [2024-09-30 22:05:08.511037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.511045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.511113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.511121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:23.815 [2024-09-30 22:05:08.511133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.511140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.511173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.511182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:23.815 [2024-09-30 22:05:08.511416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.511437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.511491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.511523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:23.815 [2024-09-30 22:05:08.511544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.511563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.511650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.815 [2024-09-30 22:05:08.511740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:23.815 [2024-09-30 22:05:08.511795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.815 [2024-09-30 22:05:08.511817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.815 [2024-09-30 22:05:08.512041] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 48.391 ms, result 0 00:23:23.815 true 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 90616 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 90616 ']' 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 90616 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90616 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90616' 00:23:23.815 killing process with pid 90616 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 90616 00:23:23.815 22:05:08 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 90616 00:23:29.078 22:05:13 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:33.263 262144+0 records in 00:23:33.263 262144+0 records out 00:23:33.263 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.55198 s, 302 MB/s 00:23:33.263 22:05:17 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:34.638 22:05:19 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:34.638 [2024-09-30 22:05:19.349931] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:34.638 [2024-09-30 22:05:19.350029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90802 ] 00:23:34.895 [2024-09-30 22:05:19.472378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:34.895 [2024-09-30 22:05:19.489094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.895 [2024-09-30 22:05:19.523521] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.895 [2024-09-30 22:05:19.612592] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.895 [2024-09-30 22:05:19.612665] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.153 [2024-09-30 22:05:19.765962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.766125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:35.153 [2024-09-30 22:05:19.766143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:35.153 [2024-09-30 22:05:19.766153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.766224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.766239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:35.153 [2024-09-30 22:05:19.766247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:35.153 [2024-09-30 22:05:19.766255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.766277] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:35.153 [2024-09-30 22:05:19.766495] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:35.153 [2024-09-30 22:05:19.766508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.766518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:35.153 [2024-09-30 22:05:19.766526] 
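The dd preparation step above is easy to verify: bs=4K times count=256K is exactly 1 GiB (the "262144+0 records out" and "1073741824 bytes" lines agree), and 1 GiB over the reported 3.55198 s elapsed time gives the printed 302 MB/s in decimal megabytes. In Python:

    bs, count = 4 * 1024, 256 * 1024   # dd bs=4K count=256K
    total = bs * count
    print(total)                       # 1073741824 -> "1.1 GB, 1.0 GiB"
    print(total / 3.55198 / 1e6)       # ~302.3 -> "302 MB/s"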
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:23:35.153 [2024-09-30 22:05:19.766539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.767612] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:35.153 [2024-09-30 22:05:19.769857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.769891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:35.153 [2024-09-30 22:05:19.769908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.246 ms 00:23:35.153 [2024-09-30 22:05:19.769915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.769965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.769978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:35.153 [2024-09-30 22:05:19.769988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:35.153 [2024-09-30 22:05:19.769997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.775111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.775141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:35.153 [2024-09-30 22:05:19.775157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:23:35.153 [2024-09-30 22:05:19.775165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.775244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.775253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:35.153 [2024-09-30 22:05:19.775261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:35.153 [2024-09-30 22:05:19.775274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.775315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.775324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:35.153 [2024-09-30 22:05:19.775331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:35.153 [2024-09-30 22:05:19.775340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.775367] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:35.153 [2024-09-30 22:05:19.776714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.776839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:35.153 [2024-09-30 22:05:19.776853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.351 ms 00:23:35.153 [2024-09-30 22:05:19.776860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.776889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.153 [2024-09-30 22:05:19.776898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:35.153 [2024-09-30 22:05:19.776906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:35.153 [2024-09-30 22:05:19.776918] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.153 [2024-09-30 22:05:19.776936] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:35.153 [2024-09-30 22:05:19.776955] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:35.153 [2024-09-30 22:05:19.776988] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:35.153 [2024-09-30 22:05:19.777007] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:35.153 [2024-09-30 22:05:19.777110] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:35.153 [2024-09-30 22:05:19.777120] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:35.154 [2024-09-30 22:05:19.777132] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:35.154 [2024-09-30 22:05:19.777141] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777150] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777161] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:35.154 [2024-09-30 22:05:19.777168] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:35.154 [2024-09-30 22:05:19.777175] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:35.154 [2024-09-30 22:05:19.777184] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:35.154 [2024-09-30 22:05:19.777213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.154 [2024-09-30 22:05:19.777220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:35.154 [2024-09-30 22:05:19.777228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:23:35.154 [2024-09-30 22:05:19.777234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.154 [2024-09-30 22:05:19.777323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.154 [2024-09-30 22:05:19.777331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:35.154 [2024-09-30 22:05:19.777338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:35.154 [2024-09-30 22:05:19.777345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.154 [2024-09-30 22:05:19.777446] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:35.154 [2024-09-30 22:05:19.777457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:35.154 [2024-09-30 22:05:19.777466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:35.154 [2024-09-30 22:05:19.777491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:23:35.154 [2024-09-30 22:05:19.777506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:35.154 [2024-09-30 22:05:19.777519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.154 [2024-09-30 22:05:19.777535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:35.154 [2024-09-30 22:05:19.777543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:35.154 [2024-09-30 22:05:19.777554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:35.154 [2024-09-30 22:05:19.777562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:35.154 [2024-09-30 22:05:19.777570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:35.154 [2024-09-30 22:05:19.777577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:35.154 [2024-09-30 22:05:19.777593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:35.154 [2024-09-30 22:05:19.777617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:35.154 [2024-09-30 22:05:19.777639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:35.154 [2024-09-30 22:05:19.777661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:35.154 [2024-09-30 22:05:19.777687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:35.154 [2024-09-30 22:05:19.777710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.154 [2024-09-30 22:05:19.777724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:35.154 [2024-09-30 22:05:19.777732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:35.154 [2024-09-30 22:05:19.777739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:35.154 [2024-09-30 22:05:19.777747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:35.154 [2024-09-30 22:05:19.777755] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:35.154 [2024-09-30 22:05:19.777762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:35.154 [2024-09-30 22:05:19.777777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:35.154 [2024-09-30 22:05:19.777784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777792] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:35.154 [2024-09-30 22:05:19.777803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:35.154 [2024-09-30 22:05:19.777810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:35.154 [2024-09-30 22:05:19.777827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:35.154 [2024-09-30 22:05:19.777834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:35.154 [2024-09-30 22:05:19.777840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:35.154 [2024-09-30 22:05:19.777848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:35.154 [2024-09-30 22:05:19.777854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:35.154 [2024-09-30 22:05:19.777861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:35.154 [2024-09-30 22:05:19.777869] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:35.154 [2024-09-30 22:05:19.777878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.154 [2024-09-30 22:05:19.777889] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:35.154 [2024-09-30 22:05:19.777897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:35.154 [2024-09-30 22:05:19.777903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:35.154 [2024-09-30 22:05:19.777910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:35.154 [2024-09-30 22:05:19.777917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:35.154 [2024-09-30 22:05:19.777926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:35.154 [2024-09-30 22:05:19.777933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:35.154 [2024-09-30 22:05:19.777939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:35.154 [2024-09-30 22:05:19.777946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:35.154 [2024-09-30 22:05:19.777953] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:35.154 [2024-09-30 22:05:19.777961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:35.154 [2024-09-30 22:05:19.777968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:35.154 [2024-09-30 22:05:19.777975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:35.154 [2024-09-30 22:05:19.777982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:35.154 [2024-09-30 22:05:19.777988] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:35.154 [2024-09-30 22:05:19.777996] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:35.154 [2024-09-30 22:05:19.778005] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:35.154 [2024-09-30 22:05:19.778012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:35.154 [2024-09-30 22:05:19.778019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:35.154 [2024-09-30 22:05:19.778026] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:35.154 [2024-09-30 22:05:19.778033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.154 [2024-09-30 22:05:19.778042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:35.154 [2024-09-30 22:05:19.778049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:23:35.154 [2024-09-30 22:05:19.778058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.154 [2024-09-30 22:05:19.796520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.154 [2024-09-30 22:05:19.796677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:35.154 [2024-09-30 22:05:19.796752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.422 ms 00:23:35.154 [2024-09-30 22:05:19.796860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.154 [2024-09-30 22:05:19.796986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.154 [2024-09-30 22:05:19.797094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:35.155 [2024-09-30 22:05:19.797125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:35.155 [2024-09-30 22:05:19.797155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.805958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.806072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:35.155 [2024-09-30 22:05:19.806123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 8.705 ms 00:23:35.155 [2024-09-30 22:05:19.806145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.806234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.806302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:35.155 [2024-09-30 22:05:19.806326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:35.155 [2024-09-30 22:05:19.806345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.806695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.806792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:35.155 [2024-09-30 22:05:19.806840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:23:35.155 [2024-09-30 22:05:19.806861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.806998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.807021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:35.155 [2024-09-30 22:05:19.807065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:35.155 [2024-09-30 22:05:19.807110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.811831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.811944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:35.155 [2024-09-30 22:05:19.811992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.686 ms 00:23:35.155 [2024-09-30 22:05:19.812014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.814398] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:35.155 [2024-09-30 22:05:19.814515] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:35.155 [2024-09-30 22:05:19.814581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.814602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:35.155 [2024-09-30 22:05:19.814621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.456 ms 00:23:35.155 [2024-09-30 22:05:19.814729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.829206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.829326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:35.155 [2024-09-30 22:05:19.829407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.433 ms 00:23:35.155 [2024-09-30 22:05:19.829692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.831469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.831582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:35.155 [2024-09-30 22:05:19.831637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.707 ms 00:23:35.155 [2024-09-30 22:05:19.831659] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.833082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.833196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:35.155 [2024-09-30 22:05:19.833248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.383 ms 00:23:35.155 [2024-09-30 22:05:19.833269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.833588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.833668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:35.155 [2024-09-30 22:05:19.833717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:23:35.155 [2024-09-30 22:05:19.833744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.849485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.849615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:35.155 [2024-09-30 22:05:19.849668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.708 ms 00:23:35.155 [2024-09-30 22:05:19.849691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.857053] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:35.155 [2024-09-30 22:05:19.859452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.859554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:35.155 [2024-09-30 22:05:19.859612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.710 ms 00:23:35.155 [2024-09-30 22:05:19.859634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.859692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.859987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:35.155 [2024-09-30 22:05:19.860109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:35.155 [2024-09-30 22:05:19.860135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.860295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.860369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:35.155 [2024-09-30 22:05:19.860413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:35.155 [2024-09-30 22:05:19.860487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.860528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.860576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:35.155 [2024-09-30 22:05:19.860607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:35.155 [2024-09-30 22:05:19.860699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.860754] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:35.155 [2024-09-30 22:05:19.860778] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.860797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:35.155 [2024-09-30 22:05:19.860851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:35.155 [2024-09-30 22:05:19.860861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.864161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.864286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:35.155 [2024-09-30 22:05:19.864301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.278 ms 00:23:35.155 [2024-09-30 22:05:19.864309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.864379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:35.155 [2024-09-30 22:05:19.864389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:35.155 [2024-09-30 22:05:19.864397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:35.155 [2024-09-30 22:05:19.864404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:35.155 [2024-09-30 22:05:19.865308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 98.951 ms, result 0 00:23:57.707  Copying: 45/1024 [MB] (45 MBps) Copying: 92/1024 [MB] (46 MBps) Copying: 136/1024 [MB] (44 MBps) Copying: 182/1024 [MB] (45 MBps) Copying: 228/1024 [MB] (46 MBps) Copying: 275/1024 [MB] (46 MBps) Copying: 323/1024 [MB] (48 MBps) Copying: 368/1024 [MB] (45 MBps) Copying: 414/1024 [MB] (45 MBps) Copying: 459/1024 [MB] (44 MBps) Copying: 504/1024 [MB] (45 MBps) Copying: 551/1024 [MB] (47 MBps) Copying: 597/1024 [MB] (45 MBps) Copying: 642/1024 [MB] (45 MBps) Copying: 688/1024 [MB] (45 MBps) Copying: 735/1024 [MB] (47 MBps) Copying: 781/1024 [MB] (45 MBps) Copying: 827/1024 [MB] (46 MBps) Copying: 872/1024 [MB] (45 MBps) Copying: 916/1024 [MB] (44 MBps) Copying: 962/1024 [MB] (45 MBps) Copying: 1008/1024 [MB] (46 MBps) Copying: 1024/1024 [MB] (average 45 MBps)[2024-09-30 22:05:42.216509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.707 [2024-09-30 22:05:42.216548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:57.707 [2024-09-30 22:05:42.216562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:57.707 [2024-09-30 22:05:42.216577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.707 [2024-09-30 22:05:42.216598] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:57.707 [2024-09-30 22:05:42.217018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.707 [2024-09-30 22:05:42.217033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:57.707 [2024-09-30 22:05:42.217041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:23:57.707 [2024-09-30 22:05:42.217049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.707 [2024-09-30 22:05:42.218533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.707 [2024-09-30 22:05:42.218564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:57.707 [2024-09-30 
22:05:42.218573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.461 ms 00:23:57.707 [2024-09-30 22:05:42.218580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.707 [2024-09-30 22:05:42.218619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.707 [2024-09-30 22:05:42.218627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:23:57.707 [2024-09-30 22:05:42.218635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:57.708 [2024-09-30 22:05:42.218642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.708 [2024-09-30 22:05:42.218682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.708 [2024-09-30 22:05:42.218690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:23:57.708 [2024-09-30 22:05:42.218697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:57.708 [2024-09-30 22:05:42.218704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.708 [2024-09-30 22:05:42.218716] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:57.708 [2024-09-30 22:05:42.218729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218851] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.218994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 
[2024-09-30 22:05:42.219037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 
state: free 00:23:57.708 [2024-09-30 22:05:42.219228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:57.708 [2024-09-30 22:05:42.219350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 
0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:57.709 [2024-09-30 22:05:42.219479] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:57.709 [2024-09-30 22:05:42.219487] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e32ab2ae-aa10-4127-98b2-3339952528db 00:23:57.709 [2024-09-30 22:05:42.219494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:57.709 [2024-09-30 22:05:42.219504] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:23:57.709 [2024-09-30 22:05:42.219511] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:57.709 [2024-09-30 22:05:42.219518] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:57.709 [2024-09-30 22:05:42.219524] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:57.709 [2024-09-30 22:05:42.219531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:57.709 [2024-09-30 22:05:42.219538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:57.709 [2024-09-30 22:05:42.219545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:57.709 [2024-09-30 22:05:42.219551] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:57.709 [2024-09-30 22:05:42.219557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.709 [2024-09-30 22:05:42.219564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:57.709 [2024-09-30 22:05:42.219572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:23:57.709 [2024-09-30 22:05:42.219580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.221207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.709 [2024-09-30 22:05:42.221230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:57.709 [2024-09-30 22:05:42.221241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.614 ms 00:23:57.709 [2024-09-30 22:05:42.221248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.221324] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:57.709 [2024-09-30 22:05:42.221334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:57.709 [2024-09-30 22:05:42.221342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:57.709 [2024-09-30 22:05:42.221350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.225667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.225698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:57.709 [2024-09-30 22:05:42.225707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.225714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.225765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.225776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:57.709 [2024-09-30 22:05:42.225783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.225790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.225819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.225827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:57.709 [2024-09-30 22:05:42.225835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.225842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.225855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.225866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:57.709 [2024-09-30 22:05:42.225875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.225885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.234534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.234571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:57.709 [2024-09-30 22:05:42.234580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.234588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.241425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.241467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:57.709 [2024-09-30 22:05:42.241484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.241492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.241531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.241540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:57.709 [2024-09-30 22:05:42.241554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.241561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:57.709 [2024-09-30 22:05:42.241583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.241591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:57.709 [2024-09-30 22:05:42.241599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.241607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.241658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.241668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:57.709 [2024-09-30 22:05:42.241675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.241683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.241704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.241713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:57.709 [2024-09-30 22:05:42.241720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.241728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.241764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.241773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:57.709 [2024-09-30 22:05:42.241781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.241787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.241827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:57.709 [2024-09-30 22:05:42.241836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:57.709 [2024-09-30 22:05:42.241844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:57.709 [2024-09-30 22:05:42.241854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.709 [2024-09-30 22:05:42.241959] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 25.427 ms, result 0 00:23:58.644 00:23:58.644 00:23:58.644 22:05:43 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:58.644 [2024-09-30 22:05:43.245721] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:23:58.644 [2024-09-30 22:05:43.245931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91048 ] 00:23:58.644 [2024-09-30 22:05:43.373067] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:23:58.644 [2024-09-30 22:05:43.395535] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.644 [2024-09-30 22:05:43.429225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.904 [2024-09-30 22:05:43.516439] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.904 [2024-09-30 22:05:43.516511] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.904 [2024-09-30 22:05:43.670043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.670214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:58.904 [2024-09-30 22:05:43.670233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:58.904 [2024-09-30 22:05:43.670242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.670298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.670308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:58.904 [2024-09-30 22:05:43.670317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:58.904 [2024-09-30 22:05:43.670324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.670345] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:58.904 [2024-09-30 22:05:43.670566] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:58.904 [2024-09-30 22:05:43.670580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.670589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:58.904 [2024-09-30 22:05:43.670598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:23:58.904 [2024-09-30 22:05:43.670607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.670853] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:23:58.904 [2024-09-30 22:05:43.670874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.670883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:58.904 [2024-09-30 22:05:43.670892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:58.904 [2024-09-30 22:05:43.670903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.670949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.670961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:58.904 [2024-09-30 22:05:43.670969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:58.904 [2024-09-30 22:05:43.670978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.671265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.671276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:58.904 [2024-09-30 22:05:43.671286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:23:58.904 [2024-09-30 22:05:43.671293] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.671356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.671365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:58.904 [2024-09-30 22:05:43.671372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:58.904 [2024-09-30 22:05:43.671379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.671400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.671408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:58.904 [2024-09-30 22:05:43.671415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:58.904 [2024-09-30 22:05:43.671422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.671438] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:58.904 [2024-09-30 22:05:43.672855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.672886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:58.904 [2024-09-30 22:05:43.672896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.420 ms 00:23:58.904 [2024-09-30 22:05:43.672902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.672933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.672941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:58.904 [2024-09-30 22:05:43.672949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:58.904 [2024-09-30 22:05:43.672960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.672977] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:58.904 [2024-09-30 22:05:43.673000] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:58.904 [2024-09-30 22:05:43.673032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:58.904 [2024-09-30 22:05:43.673046] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:58.904 [2024-09-30 22:05:43.673146] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:58.904 [2024-09-30 22:05:43.673156] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:58.904 [2024-09-30 22:05:43.673165] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:58.904 [2024-09-30 22:05:43.673175] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:58.904 [2024-09-30 22:05:43.673203] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:58.904 [2024-09-30 22:05:43.673214] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:58.904 [2024-09-30 22:05:43.673222] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:23:58.904 [2024-09-30 22:05:43.673230] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:58.904 [2024-09-30 22:05:43.673251] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:58.904 [2024-09-30 22:05:43.673258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.673270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:58.904 [2024-09-30 22:05:43.673278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:23:58.904 [2024-09-30 22:05:43.673285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.673368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.904 [2024-09-30 22:05:43.673377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:58.904 [2024-09-30 22:05:43.673386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:58.904 [2024-09-30 22:05:43.673393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.904 [2024-09-30 22:05:43.673498] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:58.904 [2024-09-30 22:05:43.673509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:58.904 [2024-09-30 22:05:43.673518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.904 [2024-09-30 22:05:43.673528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.904 [2024-09-30 22:05:43.673537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:58.904 [2024-09-30 22:05:43.673544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:58.904 [2024-09-30 22:05:43.673552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:58.904 [2024-09-30 22:05:43.673559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:58.904 [2024-09-30 22:05:43.673568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:58.904 [2024-09-30 22:05:43.673575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.904 [2024-09-30 22:05:43.673587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:58.904 [2024-09-30 22:05:43.673595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:58.904 [2024-09-30 22:05:43.673603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:58.904 [2024-09-30 22:05:43.673611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:58.904 [2024-09-30 22:05:43.673619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:58.905 [2024-09-30 22:05:43.673626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:58.905 [2024-09-30 22:05:43.673641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:58.905 [2024-09-30 22:05:43.673648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:58.905 [2024-09-30 22:05:43.673665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673672] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.905 [2024-09-30 22:05:43.673680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:58.905 [2024-09-30 22:05:43.673687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.905 [2024-09-30 22:05:43.673701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:58.905 [2024-09-30 22:05:43.673709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.905 [2024-09-30 22:05:43.673723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:58.905 [2024-09-30 22:05:43.673731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:58.905 [2024-09-30 22:05:43.673745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:58.905 [2024-09-30 22:05:43.673752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.905 [2024-09-30 22:05:43.673766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:58.905 [2024-09-30 22:05:43.673778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:58.905 [2024-09-30 22:05:43.673786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:58.905 [2024-09-30 22:05:43.673793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:58.905 [2024-09-30 22:05:43.673800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:58.905 [2024-09-30 22:05:43.673807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:58.905 [2024-09-30 22:05:43.673823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:58.905 [2024-09-30 22:05:43.673830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673838] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:58.905 [2024-09-30 22:05:43.673846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:58.905 [2024-09-30 22:05:43.673854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:58.905 [2024-09-30 22:05:43.673865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:58.905 [2024-09-30 22:05:43.673873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:58.905 [2024-09-30 22:05:43.673880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:58.905 [2024-09-30 22:05:43.673888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:58.905 [2024-09-30 22:05:43.673895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:58.905 [2024-09-30 22:05:43.673905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:58.905 [2024-09-30 22:05:43.673912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:23:58.905 [2024-09-30 22:05:43.673921] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:58.905 [2024-09-30 22:05:43.673933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.905 [2024-09-30 22:05:43.673941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:58.905 [2024-09-30 22:05:43.673948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:58.905 [2024-09-30 22:05:43.673955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:58.905 [2024-09-30 22:05:43.673961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:58.905 [2024-09-30 22:05:43.673968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:58.905 [2024-09-30 22:05:43.673975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:58.905 [2024-09-30 22:05:43.673981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:58.905 [2024-09-30 22:05:43.673988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:58.905 [2024-09-30 22:05:43.673994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:58.905 [2024-09-30 22:05:43.674001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:58.905 [2024-09-30 22:05:43.674008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:58.905 [2024-09-30 22:05:43.674015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:58.905 [2024-09-30 22:05:43.674024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:58.905 [2024-09-30 22:05:43.674031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:58.905 [2024-09-30 22:05:43.674037] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:58.905 [2024-09-30 22:05:43.674045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:58.905 [2024-09-30 22:05:43.674052] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:58.905 [2024-09-30 22:05:43.674060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:58.905 [2024-09-30 22:05:43.674067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:58.905 [2024-09-30 22:05:43.674074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:58.905 [2024-09-30 22:05:43.674081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.674088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:58.905 [2024-09-30 22:05:43.674095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:23:58.905 [2024-09-30 22:05:43.674106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.688647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.688735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:58.905 [2024-09-30 22:05:43.688770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.486 ms 00:23:58.905 [2024-09-30 22:05:43.688817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.688970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.688985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:58.905 [2024-09-30 22:05:43.689000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:58.905 [2024-09-30 22:05:43.689012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.699918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.699966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.905 [2024-09-30 22:05:43.699983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.790 ms 00:23:58.905 [2024-09-30 22:05:43.699995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.700047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.700062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.905 [2024-09-30 22:05:43.700081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:58.905 [2024-09-30 22:05:43.700093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.700266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.700289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.905 [2024-09-30 22:05:43.700304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:23:58.905 [2024-09-30 22:05:43.700316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.700453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.700463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.905 [2024-09-30 22:05:43.700472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:23:58.905 [2024-09-30 22:05:43.700480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.704888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 
22:05:43.704917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.905 [2024-09-30 22:05:43.704930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.391 ms 00:23:58.905 [2024-09-30 22:05:43.704938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.905 [2024-09-30 22:05:43.705027] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:58.905 [2024-09-30 22:05:43.705039] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:58.905 [2024-09-30 22:05:43.705052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.905 [2024-09-30 22:05:43.705059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:58.905 [2024-09-30 22:05:43.705067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:58.905 [2024-09-30 22:05:43.705076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.717498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.717524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.164 [2024-09-30 22:05:43.717540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.408 ms 00:23:59.164 [2024-09-30 22:05:43.717549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.717655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.717663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.164 [2024-09-30 22:05:43.717674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:23:59.164 [2024-09-30 22:05:43.717680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.717718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.717730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.164 [2024-09-30 22:05:43.717737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:23:59.164 [2024-09-30 22:05:43.717744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.718031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.718040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.164 [2024-09-30 22:05:43.718051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:23:59.164 [2024-09-30 22:05:43.718065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.718078] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:23:59.164 [2024-09-30 22:05:43.718087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.718097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.164 [2024-09-30 22:05:43.718104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:59.164 [2024-09-30 22:05:43.718111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.726037] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:59.164 [2024-09-30 22:05:43.726152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.726166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.164 [2024-09-30 22:05:43.726175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.027 ms 00:23:59.164 [2024-09-30 22:05:43.726196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.728556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.728680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.164 [2024-09-30 22:05:43.728694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.341 ms 00:23:59.164 [2024-09-30 22:05:43.728701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.728771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.728781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.164 [2024-09-30 22:05:43.728789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:59.164 [2024-09-30 22:05:43.728798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.728834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.728843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.164 [2024-09-30 22:05:43.728851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:59.164 [2024-09-30 22:05:43.728863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.728889] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.164 [2024-09-30 22:05:43.728900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.728907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.164 [2024-09-30 22:05:43.728914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:59.164 [2024-09-30 22:05:43.728921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.732468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.732499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.164 [2024-09-30 22:05:43.732508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:23:59.164 [2024-09-30 22:05:43.732515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.732578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.164 [2024-09-30 22:05:43.732588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:59.164 [2024-09-30 22:05:43.732596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:59.164 [2024-09-30 22:05:43.732607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.164 [2024-09-30 22:05:43.733426] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.996 ms, result 0 
00:24:20.086  Copying: 1024/1024 [MB] (average 49 MBps)[2024-09-30 22:06:04.791122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.086 [2024-09-30 22:06:04.791209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:20.086 [2024-09-30 22:06:04.791225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:20.086 [2024-09-30 22:06:04.791234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.086 [2024-09-30 22:06:04.791261] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:20.086 [2024-09-30 22:06:04.791859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.086 [2024-09-30 22:06:04.791887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:20.086 [2024-09-30 22:06:04.791897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:24:20.086 [2024-09-30 22:06:04.791905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.086 [2024-09-30 22:06:04.792132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.086 [2024-09-30 22:06:04.792143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:20.086 [2024-09-30 22:06:04.792161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:24:20.086 [2024-09-30 22:06:04.792169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.086 [2024-09-30 22:06:04.792221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.086 [2024-09-30 22:06:04.792232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:24:20.086 [2024-09-30 22:06:04.792242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:20.087 [2024-09-30 22:06:04.792251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.087 [2024-09-30 22:06:04.792307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.087 [2024-09-30 22:06:04.792317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:24:20.087 [2024-09-30 22:06:04.792327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:20.087 [2024-09-30 22:06:04.792335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.087 [2024-09-30 22:06:04.792354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:20.087 [2024-09-30 22:06:04.792368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30
22:06:04.792392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:24:20.087 [2024-09-30 22:06:04.792585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.792994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.793001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.793009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.793018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.793025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:20.087 [2024-09-30 22:06:04.793033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:20.088 [2024-09-30 22:06:04.793172] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:20.088 [2024-09-30 22:06:04.793180] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
e32ab2ae-aa10-4127-98b2-3339952528db 00:24:20.088 [2024-09-30 22:06:04.793202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:20.088 [2024-09-30 22:06:04.793210] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:24:20.088 [2024-09-30 22:06:04.793222] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:20.088 [2024-09-30 22:06:04.793230] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:20.088 [2024-09-30 22:06:04.793237] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:20.088 [2024-09-30 22:06:04.793245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:20.088 [2024-09-30 22:06:04.793252] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:20.088 [2024-09-30 22:06:04.793264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:20.088 [2024-09-30 22:06:04.793270] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:20.088 [2024-09-30 22:06:04.793278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.088 [2024-09-30 22:06:04.793285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:20.088 [2024-09-30 22:06:04.793293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.925 ms 00:24:20.088 [2024-09-30 22:06:04.793304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.795179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.088 [2024-09-30 22:06:04.795220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:20.088 [2024-09-30 22:06:04.795231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.860 ms 00:24:20.088 [2024-09-30 22:06:04.795240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.795345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:20.088 [2024-09-30 22:06:04.795355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:20.088 [2024-09-30 22:06:04.795370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:20.088 [2024-09-30 22:06:04.795378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.801531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.801561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:20.088 [2024-09-30 22:06:04.801572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.801582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.801637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.801645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:20.088 [2024-09-30 22:06:04.801654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.801661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.801707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.801716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:20.088 
[2024-09-30 22:06:04.801722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.801728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.801744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.801750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:20.088 [2024-09-30 22:06:04.801759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.801769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.812983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.813133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:20.088 [2024-09-30 22:06:04.813182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.813241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.822447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.822581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:20.088 [2024-09-30 22:06:04.822626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.822653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.822724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.822751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:20.088 [2024-09-30 22:06:04.822830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.822849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.822883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.822945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:20.088 [2024-09-30 22:06:04.822963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.822978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.823046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.823168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:20.088 [2024-09-30 22:06:04.823210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.823282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.823317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.823325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:20.088 [2024-09-30 22:06:04.823332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.823339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.823388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.823400] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:20.088 [2024-09-30 22:06:04.823416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.823422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.823461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:20.088 [2024-09-30 22:06:04.823473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:20.088 [2024-09-30 22:06:04.823480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:20.088 [2024-09-30 22:06:04.823486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:20.088 [2024-09-30 22:06:04.823591] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 32.448 ms, result 0 00:24:20.348 00:24:20.348 00:24:20.348 22:06:05 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:22.900 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:22.900 22:06:07 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:24:22.900 [2024-09-30 22:06:07.184060] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:22.900 [2024-09-30 22:06:07.184184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91305 ] 00:24:22.900 [2024-09-30 22:06:07.311225] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:22.900 [2024-09-30 22:06:07.332549] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.900 [2024-09-30 22:06:07.366444] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.900 [2024-09-30 22:06:07.454895] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:22.900 [2024-09-30 22:06:07.454959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:22.900 [2024-09-30 22:06:07.608670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.608848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:22.900 [2024-09-30 22:06:07.608867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.900 [2024-09-30 22:06:07.608875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.608929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.608939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.900 [2024-09-30 22:06:07.608947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:22.900 [2024-09-30 22:06:07.608957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.608978] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:22.900 [2024-09-30 22:06:07.609212] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:22.900 [2024-09-30 22:06:07.609230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.609241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.900 [2024-09-30 22:06:07.609249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:24:22.900 [2024-09-30 22:06:07.609258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.609510] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:24:22.900 [2024-09-30 22:06:07.609531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.609540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:22.900 [2024-09-30 22:06:07.609553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:22.900 [2024-09-30 22:06:07.609564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.609608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.609618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:22.900 [2024-09-30 22:06:07.609627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:22.900 [2024-09-30 22:06:07.609635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.609860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.609877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.900 [2024-09-30 22:06:07.609886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:24:22.900 [2024-09-30 22:06:07.609894] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.609958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.609967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.900 [2024-09-30 22:06:07.609974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:22.900 [2024-09-30 22:06:07.609981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.610005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.610014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:22.900 [2024-09-30 22:06:07.610021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:22.900 [2024-09-30 22:06:07.610028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.610043] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:22.900 [2024-09-30 22:06:07.611515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.611539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.900 [2024-09-30 22:06:07.611548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.476 ms 00:24:22.900 [2024-09-30 22:06:07.611556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.611586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.611594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:22.900 [2024-09-30 22:06:07.611601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:22.900 [2024-09-30 22:06:07.611608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.611624] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:22.900 [2024-09-30 22:06:07.611642] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:22.900 [2024-09-30 22:06:07.611685] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:22.900 [2024-09-30 22:06:07.611699] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:22.900 [2024-09-30 22:06:07.611798] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:22.900 [2024-09-30 22:06:07.611808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:22.900 [2024-09-30 22:06:07.611817] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:22.900 [2024-09-30 22:06:07.611831] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:22.900 [2024-09-30 22:06:07.611859] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:22.900 [2024-09-30 22:06:07.611872] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:22.900 [2024-09-30 22:06:07.611879] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:24:22.900 [2024-09-30 22:06:07.611886] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:22.900 [2024-09-30 22:06:07.611893] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:22.900 [2024-09-30 22:06:07.611900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.611906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:22.900 [2024-09-30 22:06:07.611917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:24:22.900 [2024-09-30 22:06:07.611924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.612008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.900 [2024-09-30 22:06:07.612016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:22.900 [2024-09-30 22:06:07.612028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:22.900 [2024-09-30 22:06:07.612035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.900 [2024-09-30 22:06:07.612140] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:22.900 [2024-09-30 22:06:07.612150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:22.900 [2024-09-30 22:06:07.612158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.900 [2024-09-30 22:06:07.612169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.900 [2024-09-30 22:06:07.612178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:22.900 [2024-09-30 22:06:07.612325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:22.900 [2024-09-30 22:06:07.612360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:22.900 [2024-09-30 22:06:07.612382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:22.900 [2024-09-30 22:06:07.612402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:22.900 [2024-09-30 22:06:07.612422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.900 [2024-09-30 22:06:07.612449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:22.900 [2024-09-30 22:06:07.612514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:22.900 [2024-09-30 22:06:07.612536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:22.900 [2024-09-30 22:06:07.612555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:22.901 [2024-09-30 22:06:07.612572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:22.901 [2024-09-30 22:06:07.612591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.901 [2024-09-30 22:06:07.612610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:22.901 [2024-09-30 22:06:07.612627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:22.901 [2024-09-30 22:06:07.612700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.901 [2024-09-30 22:06:07.612722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:22.901 [2024-09-30 22:06:07.612740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:22.901 [2024-09-30 22:06:07.612757] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.901 [2024-09-30 22:06:07.612775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:22.901 [2024-09-30 22:06:07.612792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:22.901 [2024-09-30 22:06:07.612837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.901 [2024-09-30 22:06:07.612881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:22.901 [2024-09-30 22:06:07.612902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:22.901 [2024-09-30 22:06:07.612967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.901 [2024-09-30 22:06:07.612990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:22.901 [2024-09-30 22:06:07.613008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:22.901 [2024-09-30 22:06:07.613026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:22.901 [2024-09-30 22:06:07.613043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:22.901 [2024-09-30 22:06:07.613061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:22.901 [2024-09-30 22:06:07.613113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.901 [2024-09-30 22:06:07.613134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:22.901 [2024-09-30 22:06:07.613157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:22.901 [2024-09-30 22:06:07.613175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:22.901 [2024-09-30 22:06:07.613206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:22.901 [2024-09-30 22:06:07.613258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:22.901 [2024-09-30 22:06:07.613267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.901 [2024-09-30 22:06:07.613274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:22.901 [2024-09-30 22:06:07.613280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:22.901 [2024-09-30 22:06:07.613287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.901 [2024-09-30 22:06:07.613294] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:22.901 [2024-09-30 22:06:07.613301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:22.901 [2024-09-30 22:06:07.613309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:22.901 [2024-09-30 22:06:07.613321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:22.901 [2024-09-30 22:06:07.613331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:22.901 [2024-09-30 22:06:07.613338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:22.901 [2024-09-30 22:06:07.613345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:22.901 [2024-09-30 22:06:07.613351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:22.901 [2024-09-30 22:06:07.613360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:22.901 [2024-09-30 22:06:07.613366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:24:22.901 [2024-09-30 22:06:07.613375] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:22.901 [2024-09-30 22:06:07.613384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.901 [2024-09-30 22:06:07.613392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:22.901 [2024-09-30 22:06:07.613399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:22.901 [2024-09-30 22:06:07.613406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:22.901 [2024-09-30 22:06:07.613413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:22.901 [2024-09-30 22:06:07.613420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:22.901 [2024-09-30 22:06:07.613427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:22.901 [2024-09-30 22:06:07.613434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:22.901 [2024-09-30 22:06:07.613441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:22.901 [2024-09-30 22:06:07.613448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:22.901 [2024-09-30 22:06:07.613456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:22.901 [2024-09-30 22:06:07.613462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:22.901 [2024-09-30 22:06:07.613469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:22.901 [2024-09-30 22:06:07.613478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:22.901 [2024-09-30 22:06:07.613486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:22.901 [2024-09-30 22:06:07.613493] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:22.901 [2024-09-30 22:06:07.613504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:22.901 [2024-09-30 22:06:07.613512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:22.901 [2024-09-30 22:06:07.613519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:22.901 [2024-09-30 22:06:07.613526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:22.901 [2024-09-30 22:06:07.613534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:22.901 [2024-09-30 22:06:07.613542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.901 [2024-09-30 22:06:07.613552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:22.901 [2024-09-30 22:06:07.613559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:24:22.901 [2024-09-30 22:06:07.613566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.901 [2024-09-30 22:06:07.630278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.901 [2024-09-30 22:06:07.630319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.901 [2024-09-30 22:06:07.630336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.651 ms 00:24:22.901 [2024-09-30 22:06:07.630344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.901 [2024-09-30 22:06:07.630434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.901 [2024-09-30 22:06:07.630443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:22.901 [2024-09-30 22:06:07.630451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:22.901 [2024-09-30 22:06:07.630458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.901 [2024-09-30 22:06:07.638777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.901 [2024-09-30 22:06:07.638815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.901 [2024-09-30 22:06:07.638826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.259 ms 00:24:22.901 [2024-09-30 22:06:07.638834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.901 [2024-09-30 22:06:07.638866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.901 [2024-09-30 22:06:07.638875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.901 [2024-09-30 22:06:07.638884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:22.901 [2024-09-30 22:06:07.638892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.901 [2024-09-30 22:06:07.638961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.901 [2024-09-30 22:06:07.638972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.901 [2024-09-30 22:06:07.638983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:22.902 [2024-09-30 22:06:07.638991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.639111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.639120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.902 [2024-09-30 22:06:07.639128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:22.902 [2024-09-30 22:06:07.639139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.643907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 
22:06:07.644051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.902 [2024-09-30 22:06:07.644067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:24:22.902 [2024-09-30 22:06:07.644082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.644237] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:22.902 [2024-09-30 22:06:07.644257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:22.902 [2024-09-30 22:06:07.644267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.644276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:22.902 [2024-09-30 22:06:07.644285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:24:22.902 [2024-09-30 22:06:07.644292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.656971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.656998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:22.902 [2024-09-30 22:06:07.657014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.660 ms 00:24:22.902 [2024-09-30 22:06:07.657024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.657139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.657147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:22.902 [2024-09-30 22:06:07.657155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:24:22.902 [2024-09-30 22:06:07.657164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.657227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.657237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:22.902 [2024-09-30 22:06:07.657248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:24:22.902 [2024-09-30 22:06:07.657255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.657546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.657560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:22.902 [2024-09-30 22:06:07.657568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:24:22.902 [2024-09-30 22:06:07.657577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.657591] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:24:22.902 [2024-09-30 22:06:07.657600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.657607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:22.902 [2024-09-30 22:06:07.657617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:22.902 [2024-09-30 22:06:07.657624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.665567] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:22.902 [2024-09-30 22:06:07.665772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.665785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:22.902 [2024-09-30 22:06:07.665794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.132 ms 00:24:22.902 [2024-09-30 22:06:07.665801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.668107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.668132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:22.902 [2024-09-30 22:06:07.668142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.285 ms 00:24:22.902 [2024-09-30 22:06:07.668150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.668224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.668234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:22.902 [2024-09-30 22:06:07.668242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:22.902 [2024-09-30 22:06:07.668253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.668287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.668296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:22.902 [2024-09-30 22:06:07.668303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.902 [2024-09-30 22:06:07.668310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.668339] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:22.902 [2024-09-30 22:06:07.668353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.668360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:22.902 [2024-09-30 22:06:07.668367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:22.902 [2024-09-30 22:06:07.668374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.671932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.671969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:22.902 [2024-09-30 22:06:07.671980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.542 ms 00:24:22.902 [2024-09-30 22:06:07.671988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.672052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.902 [2024-09-30 22:06:07.672065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:22.902 [2024-09-30 22:06:07.672076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:22.902 [2024-09-30 22:06:07.672083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.902 [2024-09-30 22:06:07.672974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 63.953 ms, result 0 
00:24:46.273  Copying: 44/1024 [MB] (44 MBps) Copying: 89/1024 [MB] (44 MBps) Copying: 133/1024 [MB] (44 MBps) Copying: 179/1024 [MB] (46 MBps) Copying: 225/1024 [MB] (46 MBps) Copying: 270/1024 [MB] (44 MBps) Copying: 315/1024 [MB] (45 MBps) Copying: 363/1024 [MB] (47 MBps) Copying: 412/1024 [MB] (49 MBps) Copying: 459/1024 [MB] (46 MBps) Copying: 505/1024 [MB] (46 MBps) Copying: 552/1024 [MB] (46 MBps) Copying: 597/1024 [MB] (45 MBps) Copying: 641/1024 [MB] (44 MBps) Copying: 685/1024 [MB] (44 MBps) Copying: 730/1024 [MB] (44 MBps) Copying: 775/1024 [MB] (45 MBps) Copying: 822/1024 [MB] (46 MBps) Copying: 867/1024 [MB] (45 MBps) Copying: 912/1024 [MB] (45 MBps) Copying: 959/1024 [MB] (46 MBps) Copying: 1004/1024 [MB] (45 MBps) Copying: 1023/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 43 MBps)[2024-09-30 22:06:31.026002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-09-30 22:06:31.026056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:46.273 [2024-09-30 22:06:31.026071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.273 [2024-09-30 22:06:31.026081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-09-30 22:06:31.027369] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:46.273 [2024-09-30 22:06:31.029904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-09-30 22:06:31.029935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:46.273 [2024-09-30 22:06:31.029945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.509 ms 00:24:46.273 [2024-09-30 22:06:31.029953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-09-30 22:06:31.039759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-09-30 22:06:31.039797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:46.273 [2024-09-30 22:06:31.039807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.502 ms 00:24:46.273 [2024-09-30 22:06:31.039814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-09-30 22:06:31.039854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-09-30 22:06:31.039862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:24:46.273 [2024-09-30 22:06:31.039871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.273 [2024-09-30 22:06:31.039878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.273 [2024-09-30 22:06:31.039921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.273 [2024-09-30 22:06:31.039930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:24:46.274 [2024-09-30 22:06:31.039940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:46.274 [2024-09-30 22:06:31.039947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.274 [2024-09-30 22:06:31.039960] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:46.274 [2024-09-30 22:06:31.039971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129792 / 261120 wr_cnt: 1 state: open 00:24:46.274 [2024-09-30 22:06:31.039983] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.039991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.039998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040168] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:46.274 [2024-09-30 22:06:31.040321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 
[2024-09-30 22:06:31.040380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:24:46.275 [2024-09-30 22:06:31.040561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:46.275 [2024-09-30 22:06:31.040743] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:24:46.275 [2024-09-30 22:06:31.040751] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e32ab2ae-aa10-4127-98b2-3339952528db 00:24:46.275 [2024-09-30 22:06:31.040758] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129792 00:24:46.275 [2024-09-30 22:06:31.040766] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129824 00:24:46.275 [2024-09-30 22:06:31.040772] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129792 00:24:46.275 [2024-09-30 22:06:31.040780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:24:46.275 [2024-09-30 22:06:31.040787] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:46.275 [2024-09-30 22:06:31.040797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:46.275 [2024-09-30 22:06:31.040804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:46.276 [2024-09-30 22:06:31.040810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:46.276 [2024-09-30 22:06:31.040816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:46.276 [2024-09-30 22:06:31.040824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.276 [2024-09-30 22:06:31.040831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:46.276 [2024-09-30 22:06:31.040839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:24:46.276 [2024-09-30 22:06:31.040846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.042271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.276 [2024-09-30 22:06:31.042293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:46.276 [2024-09-30 22:06:31.042301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.412 ms 00:24:46.276 [2024-09-30 22:06:31.042311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.042384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.276 [2024-09-30 22:06:31.042395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:46.276 [2024-09-30 22:06:31.042403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:46.276 [2024-09-30 22:06:31.042410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.046892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.276 [2024-09-30 22:06:31.046995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.276 [2024-09-30 22:06:31.047047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.047070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.047133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.276 [2024-09-30 22:06:31.047182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.276 [2024-09-30 22:06:31.047215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.047234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.047278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:24:46.276 [2024-09-30 22:06:31.047301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.276 [2024-09-30 22:06:31.047324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.047345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.047415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.276 [2024-09-30 22:06:31.047435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.276 [2024-09-30 22:06:31.047454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.047472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.055882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.276 [2024-09-30 22:06:31.056034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.276 [2024-09-30 22:06:31.056091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.056163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.063737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.276 [2024-09-30 22:06:31.063966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.276 [2024-09-30 22:06:31.064018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.064041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.064115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.276 [2024-09-30 22:06:31.064163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.276 [2024-09-30 22:06:31.064269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.064328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.276 [2024-09-30 22:06:31.064374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.276 [2024-09-30 22:06:31.064420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.276 [2024-09-30 22:06:31.064443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.276 [2024-09-30 22:06:31.064483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.278 [2024-09-30 22:06:31.064544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.278 [2024-09-30 22:06:31.064647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.278 [2024-09-30 22:06:31.064670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.278 [2024-09-30 22:06:31.064689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.278 [2024-09-30 22:06:31.064730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.278 [2024-09-30 22:06:31.064782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:46.278 [2024-09-30 22:06:31.064804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.278 [2024-09-30 22:06:31.064822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.278 [2024-09-30 
22:06:31.064866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.278 [2024-09-30 22:06:31.064921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.278 [2024-09-30 22:06:31.064947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.278 [2024-09-30 22:06:31.064965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.278 [2024-09-30 22:06:31.065042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:46.278 [2024-09-30 22:06:31.065098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.278 [2024-09-30 22:06:31.065141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:46.278 [2024-09-30 22:06:31.065162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.278 [2024-09-30 22:06:31.065306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 42.131 ms, result 0 00:24:48.184 00:24:48.184 00:24:48.442 22:06:33 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:48.442 [2024-09-30 22:06:33.062460] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:24:48.442 [2024-09-30 22:06:33.062699] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91563 ] 00:24:48.442 [2024-09-30 22:06:33.190371] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:24:48.442 [2024-09-30 22:06:33.212440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.442 [2024-09-30 22:06:33.245844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.701 [2024-09-30 22:06:33.333171] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.701 [2024-09-30 22:06:33.333244] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.701 [2024-09-30 22:06:33.486596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.701 [2024-09-30 22:06:33.486761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:48.701 [2024-09-30 22:06:33.486780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:48.701 [2024-09-30 22:06:33.486789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.701 [2024-09-30 22:06:33.486842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.701 [2024-09-30 22:06:33.486854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.701 [2024-09-30 22:06:33.486863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:48.701 [2024-09-30 22:06:33.486870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.701 [2024-09-30 22:06:33.486890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:48.701 [2024-09-30 22:06:33.487105] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:48.701 [2024-09-30 22:06:33.487118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.701 [2024-09-30 22:06:33.487125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.701 [2024-09-30 22:06:33.487133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:24:48.701 [2024-09-30 22:06:33.487144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.701 [2024-09-30 22:06:33.487421] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:24:48.701 [2024-09-30 22:06:33.487445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.701 [2024-09-30 22:06:33.487454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:48.701 [2024-09-30 22:06:33.487464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:48.701 [2024-09-30 22:06:33.487475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.701 [2024-09-30 22:06:33.487517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.701 [2024-09-30 22:06:33.487529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:48.701 [2024-09-30 22:06:33.487542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:48.701 [2024-09-30 22:06:33.487551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.701 [2024-09-30 22:06:33.487774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.702 [2024-09-30 22:06:33.487785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.702 [2024-09-30 22:06:33.487796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:24:48.702 [2024-09-30 22:06:33.487804] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.702 [2024-09-30 22:06:33.487881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.702 [2024-09-30 22:06:33.487890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.702 [2024-09-30 22:06:33.487898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:48.702 [2024-09-30 22:06:33.487905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.702 [2024-09-30 22:06:33.487927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.702 [2024-09-30 22:06:33.487935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:48.702 [2024-09-30 22:06:33.487942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:48.702 [2024-09-30 22:06:33.487949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.702 [2024-09-30 22:06:33.487964] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:48.702 [2024-09-30 22:06:33.489458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.702 [2024-09-30 22:06:33.489486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.702 [2024-09-30 22:06:33.489495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.497 ms 00:24:48.702 [2024-09-30 22:06:33.489502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.702 [2024-09-30 22:06:33.489538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.702 [2024-09-30 22:06:33.489546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:48.702 [2024-09-30 22:06:33.489553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:48.702 [2024-09-30 22:06:33.489560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.702 [2024-09-30 22:06:33.489576] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:48.702 [2024-09-30 22:06:33.489598] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:48.702 [2024-09-30 22:06:33.489632] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:48.702 [2024-09-30 22:06:33.489649] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:48.702 [2024-09-30 22:06:33.489753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:48.702 [2024-09-30 22:06:33.489766] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:48.702 [2024-09-30 22:06:33.489776] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:48.702 [2024-09-30 22:06:33.489789] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:48.702 [2024-09-30 22:06:33.489797] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:48.702 [2024-09-30 22:06:33.489807] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:48.702 [2024-09-30 22:06:33.489814] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
address size: 4 00:24:48.702 [2024-09-30 22:06:33.489821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:48.702 [2024-09-30 22:06:33.489830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:48.702 [2024-09-30 22:06:33.489840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.702 [2024-09-30 22:06:33.489847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:48.702 [2024-09-30 22:06:33.489854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:24:48.702 [2024-09-30 22:06:33.489861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.702 [2024-09-30 22:06:33.489942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.702 [2024-09-30 22:06:33.489950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:48.702 [2024-09-30 22:06:33.489959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:48.702 [2024-09-30 22:06:33.489965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.702 [2024-09-30 22:06:33.490076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:48.702 [2024-09-30 22:06:33.490090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:48.702 [2024-09-30 22:06:33.490098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.702 [2024-09-30 22:06:33.490105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:48.702 [2024-09-30 22:06:33.490121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:48.702 [2024-09-30 22:06:33.490139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:48.702 [2024-09-30 22:06:33.490147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.702 [2024-09-30 22:06:33.490166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:48.702 [2024-09-30 22:06:33.490174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:48.702 [2024-09-30 22:06:33.490181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.702 [2024-09-30 22:06:33.490336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:48.702 [2024-09-30 22:06:33.490368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:48.702 [2024-09-30 22:06:33.490390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:48.702 [2024-09-30 22:06:33.490434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:48.702 [2024-09-30 22:06:33.490495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:48.702 [2024-09-30 22:06:33.490538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490557] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.702 [2024-09-30 22:06:33.490579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:48.702 [2024-09-30 22:06:33.490598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.702 [2024-09-30 22:06:33.490685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:48.702 [2024-09-30 22:06:33.490706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.702 [2024-09-30 22:06:33.490784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:48.702 [2024-09-30 22:06:33.490802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.702 [2024-09-30 22:06:33.490837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:48.702 [2024-09-30 22:06:33.490855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:48.702 [2024-09-30 22:06:33.490904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.702 [2024-09-30 22:06:33.490925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:48.702 [2024-09-30 22:06:33.490943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:48.702 [2024-09-30 22:06:33.490960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.702 [2024-09-30 22:06:33.490978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:48.702 [2024-09-30 22:06:33.490999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:48.702 [2024-09-30 22:06:33.491038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.702 [2024-09-30 22:06:33.491086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:48.702 [2024-09-30 22:06:33.491106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:48.702 [2024-09-30 22:06:33.491143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.702 [2024-09-30 22:06:33.491164] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:48.702 [2024-09-30 22:06:33.491182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:48.702 [2024-09-30 22:06:33.491212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.702 [2024-09-30 22:06:33.491230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.702 [2024-09-30 22:06:33.491285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:48.702 [2024-09-30 22:06:33.491307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:48.702 [2024-09-30 22:06:33.491326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:48.702 [2024-09-30 22:06:33.491345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:48.702 [2024-09-30 22:06:33.491362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:48.702 [2024-09-30 22:06:33.491371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 
MiB 00:24:48.702 [2024-09-30 22:06:33.491379] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:48.702 [2024-09-30 22:06:33.491396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.702 [2024-09-30 22:06:33.491404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:48.702 [2024-09-30 22:06:33.491411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:48.702 [2024-09-30 22:06:33.491418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:48.702 [2024-09-30 22:06:33.491425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:48.702 [2024-09-30 22:06:33.491433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:48.702 [2024-09-30 22:06:33.491440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:48.702 [2024-09-30 22:06:33.491447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:48.703 [2024-09-30 22:06:33.491453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:48.703 [2024-09-30 22:06:33.491460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:48.703 [2024-09-30 22:06:33.491467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:48.703 [2024-09-30 22:06:33.491474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:48.703 [2024-09-30 22:06:33.491481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:48.703 [2024-09-30 22:06:33.491488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:48.703 [2024-09-30 22:06:33.491495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:48.703 [2024-09-30 22:06:33.491502] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:48.703 [2024-09-30 22:06:33.491512] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.703 [2024-09-30 22:06:33.491522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:48.703 [2024-09-30 22:06:33.491529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:48.703 [2024-09-30 22:06:33.491536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:48.703 [2024-09-30 22:06:33.491543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:48.703 [2024-09-30 22:06:33.491550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.703 [2024-09-30 22:06:33.491557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:48.703 [2024-09-30 22:06:33.491571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.543 ms 00:24:48.703 [2024-09-30 22:06:33.491578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.703 [2024-09-30 22:06:33.508276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.703 [2024-09-30 22:06:33.508414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.703 [2024-09-30 22:06:33.508487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.639 ms 00:24:48.703 [2024-09-30 22:06:33.508515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.703 [2024-09-30 22:06:33.508626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.703 [2024-09-30 22:06:33.508660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:48.703 [2024-09-30 22:06:33.508685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:48.703 [2024-09-30 22:06:33.508755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.517133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.517270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:48.962 [2024-09-30 22:06:33.517329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.296 ms 00:24:48.962 [2024-09-30 22:06:33.517360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.517429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.517462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:48.962 [2024-09-30 22:06:33.517574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:48.962 [2024-09-30 22:06:33.517604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.517693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.517757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:48.962 [2024-09-30 22:06:33.517787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:24:48.962 [2024-09-30 22:06:33.517807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.517945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.517970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:48.962 [2024-09-30 22:06:33.517996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:48.962 [2024-09-30 22:06:33.518059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.522741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 
22:06:33.522845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:48.962 [2024-09-30 22:06:33.522902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:24:48.962 [2024-09-30 22:06:33.522929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.523181] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:48.962 [2024-09-30 22:06:33.523287] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:48.962 [2024-09-30 22:06:33.523351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.523371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:48.962 [2024-09-30 22:06:33.523419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:24:48.962 [2024-09-30 22:06:33.523441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.535691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.535778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:48.962 [2024-09-30 22:06:33.535839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.219 ms 00:24:48.962 [2024-09-30 22:06:33.535869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.536248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.536345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:48.962 [2024-09-30 22:06:33.536404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:48.962 [2024-09-30 22:06:33.536431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.536523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.536551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:48.962 [2024-09-30 22:06:33.536580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:24:48.962 [2024-09-30 22:06:33.536633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.536940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.537009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:48.962 [2024-09-30 22:06:33.537056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:24:48.962 [2024-09-30 22:06:33.537077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.537111] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:24:48.962 [2024-09-30 22:06:33.537142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.537217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:48.962 [2024-09-30 22:06:33.537241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:48.962 [2024-09-30 22:06:33.537259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.545215] 
ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:48.962 [2024-09-30 22:06:33.545403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.545469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:48.962 [2024-09-30 22:06:33.545514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.090 ms 00:24:48.962 [2024-09-30 22:06:33.545535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.547886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.547975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:48.962 [2024-09-30 22:06:33.548021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:24:48.962 [2024-09-30 22:06:33.548042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.962 [2024-09-30 22:06:33.548102] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:24:48.962 [2024-09-30 22:06:33.548711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.962 [2024-09-30 22:06:33.548782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:48.962 [2024-09-30 22:06:33.548827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:24:48.963 [2024-09-30 22:06:33.548852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.963 [2024-09-30 22:06:33.548934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.963 [2024-09-30 22:06:33.548962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:48.963 [2024-09-30 22:06:33.548982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:48.963 [2024-09-30 22:06:33.549026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.963 [2024-09-30 22:06:33.549078] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:48.963 [2024-09-30 22:06:33.549101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.963 [2024-09-30 22:06:33.549120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:48.963 [2024-09-30 22:06:33.549162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:48.963 [2024-09-30 22:06:33.549185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.963 [2024-09-30 22:06:33.552646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.963 [2024-09-30 22:06:33.552747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:48.963 [2024-09-30 22:06:33.552799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.383 ms 00:24:48.963 [2024-09-30 22:06:33.552824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.963 [2024-09-30 22:06:33.552923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.963 [2024-09-30 22:06:33.552948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:48.963 [2024-09-30 22:06:33.552996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:48.963 [2024-09-30 22:06:33.553017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.963 
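The startup sequence above is a chain of FTL management steps, each reported by trace_step as an Action record followed by its name, duration, and status. A minimal sketch for summarizing such a trace offline (a hypothetical helper, not part of the SPDK tree; it assumes one record per line, as in the unwrapped log):

  #!/usr/bin/env bash
  # ftl_step_times.sh (hypothetical): rank FTL management steps by duration.
  # Pairs each "name: <step>" record with the "duration: <N> ms" record that
  # follows it, then prints the slowest steps first.
  awk '
    /trace_step: \*NOTICE\*: .*name: /     { sub(/.*name: /, ""); name = $0 }
    /trace_step: \*NOTICE\*: .*duration: / { sub(/.*duration: /, "");
                                             sub(/ ms.*/, "");
                                             printf "%10.3f ms  %s\n", $0, name }
  ' "${1:-ftl.log}" | sort -rn | head

Run against this trace it would surface "Initialize metadata" (16.639 ms) and "Restore valid map metadata" (12.219 ms) as the dominant startup costs.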
[2024-09-30 22:06:33.553864] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 66.884 ms, result 0 00:25:10.025  Copying: 50/1024 [MB] (50 MBps) Copying: 100/1024 [MB] (50 MBps) Copying: 148/1024 [MB] (48 MBps) Copying: 198/1024 [MB] (50 MBps) Copying: 248/1024 [MB] (50 MBps) Copying: 299/1024 [MB] (50 MBps) Copying: 348/1024 [MB] (48 MBps) Copying: 398/1024 [MB] (50 MBps) Copying: 448/1024 [MB] (49 MBps) Copying: 498/1024 [MB] (49 MBps) Copying: 548/1024 [MB] (50 MBps) Copying: 598/1024 [MB] (49 MBps) Copying: 647/1024 [MB] (48 MBps) Copying: 696/1024 [MB] (49 MBps) Copying: 746/1024 [MB] (50 MBps) Copying: 795/1024 [MB] (48 MBps) Copying: 842/1024 [MB] (47 MBps) Copying: 890/1024 [MB] (47 MBps) Copying: 939/1024 [MB] (49 MBps) Copying: 984/1024 [MB] (45 MBps) Copying: 1024/1024 [MB] (average 49 MBps)[2024-09-30 22:06:54.571843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-09-30 22:06:54.571899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:10.025 [2024-09-30 22:06:54.571920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:10.025 [2024-09-30 22:06:54.571928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-09-30 22:06:54.571948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:10.025 [2024-09-30 22:06:54.572410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-09-30 22:06:54.572428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:10.025 [2024-09-30 22:06:54.572437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:25:10.025 [2024-09-30 22:06:54.572444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-09-30 22:06:54.572653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-09-30 22:06:54.572663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:10.025 [2024-09-30 22:06:54.572672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:25:10.025 [2024-09-30 22:06:54.572680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-09-30 22:06:54.572706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-09-30 22:06:54.572714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:25:10.025 [2024-09-30 22:06:54.572723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:10.025 [2024-09-30 22:06:54.572731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-09-30 22:06:54.572779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.025 [2024-09-30 22:06:54.572791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:10.025 [2024-09-30 22:06:54.572800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:10.025 [2024-09-30 22:06:54.572808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.025 [2024-09-30 22:06:54.572822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:10.025 [2024-09-30 22:06:54.572836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 
00:25:10.025 [2024-09-30 22:06:54.572846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.572998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 
state: free 00:25:10.025 [2024-09-30 22:06:54.573034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:10.025 [2024-09-30 22:06:54.573135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 
0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:10.026 [2024-09-30 22:06:54.573599] ftl_debug.c: 
211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:10.026 [2024-09-30 22:06:54.573611] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e32ab2ae-aa10-4127-98b2-3339952528db 00:25:10.026 [2024-09-30 22:06:54.573618] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:10.026 [2024-09-30 22:06:54.573626] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1312 00:25:10.026 [2024-09-30 22:06:54.573633] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 1280 00:25:10.026 [2024-09-30 22:06:54.573640] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0250 00:25:10.026 [2024-09-30 22:06:54.573649] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:10.026 [2024-09-30 22:06:54.573657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:10.026 [2024-09-30 22:06:54.573668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:10.026 [2024-09-30 22:06:54.573674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:10.026 [2024-09-30 22:06:54.573680] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:10.026 [2024-09-30 22:06:54.573687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.026 [2024-09-30 22:06:54.573693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:10.026 [2024-09-30 22:06:54.573701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.865 ms 00:25:10.026 [2024-09-30 22:06:54.573708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.026 [2024-09-30 22:06:54.575492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.026 [2024-09-30 22:06:54.575598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:10.026 [2024-09-30 22:06:54.575665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:25:10.026 [2024-09-30 22:06:54.575687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.026 [2024-09-30 22:06:54.575775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.026 [2024-09-30 22:06:54.575870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:10.026 [2024-09-30 22:06:54.575893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:10.026 [2024-09-30 22:06:54.575911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.026 [2024-09-30 22:06:54.580456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.026 [2024-09-30 22:06:54.580560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.026 [2024-09-30 22:06:54.580607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.026 [2024-09-30 22:06:54.580629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.026 [2024-09-30 22:06:54.580697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.026 [2024-09-30 22:06:54.580813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.026 [2024-09-30 22:06:54.580836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.026 [2024-09-30 22:06:54.580854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.026 [2024-09-30 22:06:54.580895] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.026 [2024-09-30 22:06:54.580917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.026 [2024-09-30 22:06:54.580978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.026 [2024-09-30 22:06:54.581004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.026 [2024-09-30 22:06:54.581036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.026 [2024-09-30 22:06:54.581057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.027 [2024-09-30 22:06:54.581075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.581093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.591010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.591146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.027 [2024-09-30 22:06:54.591217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.591294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.602457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.602600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.027 [2024-09-30 22:06:54.602654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.602685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.603008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.603035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:10.027 [2024-09-30 22:06:54.603093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.603119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.603164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.603198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:10.027 [2024-09-30 22:06:54.603219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.603355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.603429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.603452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:10.027 [2024-09-30 22:06:54.603580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.603601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.603645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.603667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:10.027 [2024-09-30 22:06:54.603723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.603744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.603789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.603818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:10.027 [2024-09-30 22:06:54.603889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.603908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.604004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:10.027 [2024-09-30 22:06:54.604031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:10.027 [2024-09-30 22:06:54.604050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:10.027 [2024-09-30 22:06:54.604182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.027 [2024-09-30 22:06:54.604516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 32.644 ms, result 0 00:25:10.027 00:25:10.027 00:25:10.027 22:06:54 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:12.556 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:12.556 Process with pid 90616 is not found 00:25:12.556 Remove shared memory files 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 90616 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 90616 ']' 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 90616 00:25:12.556 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90616) - No such process 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 90616 is not found' 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_band_md /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_l2p_l1 /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_l2p_l2 /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_l2p_l2_ctx /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_nvc_md /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_p2l_pool /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_sb /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_sb_shm /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_trim_bitmap /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_trim_log /dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_trim_md 
/dev/hugepages/ftl_e32ab2ae-aa10-4127-98b2-3339952528db_vmap 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:25:12.556 ************************************ 00:25:12.556 END TEST ftl_restore_fast 00:25:12.556 ************************************ 00:25:12.556 00:25:12.556 real 1m54.967s 00:25:12.556 user 1m45.770s 00:25:12.556 sys 0m10.813s 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:12.556 22:06:56 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:25:12.556 22:06:57 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:25:12.556 22:06:57 ftl -- ftl/ftl.sh@14 -- # killprocess 85200 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@950 -- # '[' -z 85200 ']' 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@954 -- # kill -0 85200 00:25:12.556 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (85200) - No such process 00:25:12.556 Process with pid 85200 is not found 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 85200 is not found' 00:25:12.556 22:06:57 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:25:12.556 22:06:57 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=91833 00:25:12.556 22:06:57 ftl -- ftl/ftl.sh@20 -- # waitforlisten 91833 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@831 -- # '[' -z 91833 ']' 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:12.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:12.556 22:06:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:12.556 22:06:57 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:12.556 [2024-09-30 22:06:57.099723] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.11.0-rc0 initialization... 00:25:12.557 [2024-09-30 22:06:57.099855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91833 ] 00:25:12.557 [2024-09-30 22:06:57.228080] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
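The killprocess calls traced above probe the target with kill -0 before escalating; when the PID is already gone, as with 90616 and 85200 here, the helper just reports it and moves on. A minimal sketch of that pattern (simplified; the real autotest_common.sh helper also checks the process name and does not suppress the kill error, which is why "No such process" appears in the log):

  # killprocess (simplified sketch): terminate a test daemon if it is
  # still alive, mirroring the trace above.
  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1             # no PID recorded, nothing to do
    if ! kill -0 "$pid" 2>/dev/null; then
      echo "Process with pid $pid is not found"
      return 0                            # already exited, as in the log
    fi
    kill "$pid" && wait "$pid"            # SIGTERM, then reap the child
  }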
00:25:12.557 [2024-09-30 22:06:57.248495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.557 [2024-09-30 22:06:57.282326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.122 22:06:57 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:13.122 22:06:57 ftl -- common/autotest_common.sh@864 -- # return 0 00:25:13.122 22:06:57 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:13.379 nvme0n1 00:25:13.638 22:06:58 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:25:13.638 22:06:58 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:13.638 22:06:58 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:13.638 22:06:58 ftl -- ftl/common.sh@28 -- # stores=9e21a614-2045-458d-91c1-63a00a2877e8 00:25:13.638 22:06:58 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:25:13.638 22:06:58 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9e21a614-2045-458d-91c1-63a00a2877e8 00:25:13.897 22:06:58 ftl -- ftl/ftl.sh@23 -- # killprocess 91833 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@950 -- # '[' -z 91833 ']' 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@954 -- # kill -0 91833 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@955 -- # uname 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91833 00:25:13.897 killing process with pid 91833 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91833' 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@969 -- # kill 91833 00:25:13.897 22:06:58 ftl -- common/autotest_common.sh@974 -- # wait 91833 00:25:14.155 22:06:58 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:14.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.413 Waiting for block devices as requested 00:25:14.413 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:14.413 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:14.671 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:14.671 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:19.933 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:19.933 22:07:04 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:25:19.933 22:07:04 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:19.933 Remove shared memory files 00:25:19.933 22:07:04 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:25:19.933 22:07:04 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:25:19.933 22:07:04 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:25:19.933 22:07:04 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:19.933 22:07:04 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:25:19.933 00:25:19.933 real 9m17.250s 00:25:19.933 user 10m54.737s 00:25:19.933 sys 1m11.828s 00:25:19.933 22:07:04 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:19.933 22:07:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:19.933 ************************************ 00:25:19.933 END TEST ftl 00:25:19.933 
************************************ 00:25:19.933 22:07:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:19.933 22:07:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:19.933 22:07:04 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:19.933 22:07:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:19.933 22:07:04 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:19.933 22:07:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:19.933 22:07:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:19.933 22:07:04 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:19.933 22:07:04 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:19.933 22:07:04 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:19.933 22:07:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:19.933 22:07:04 -- common/autotest_common.sh@10 -- # set +x 00:25:19.933 22:07:04 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:19.933 22:07:04 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:19.933 22:07:04 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:19.933 22:07:04 -- common/autotest_common.sh@10 -- # set +x 00:25:20.868 INFO: APP EXITING 00:25:20.868 INFO: killing all VMs 00:25:20.868 INFO: killing vhost app 00:25:20.868 INFO: EXIT DONE 00:25:21.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:21.386 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:21.386 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:21.386 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:25:21.386 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:25:21.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:21.907 Cleaning 00:25:21.907 Removing: /var/run/dpdk/spdk0/config 00:25:21.907 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:22.166 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:22.166 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:22.166 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:22.166 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:22.167 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:22.167 Removing: /var/run/dpdk/spdk0 00:25:22.167 Removing: /var/run/dpdk/spdk_pid70609 00:25:22.167 Removing: /var/run/dpdk/spdk_pid70772 00:25:22.167 Removing: /var/run/dpdk/spdk_pid70974 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71056 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71085 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71196 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71209 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71391 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71465 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71544 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71639 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71719 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71759 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71790 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71860 00:25:22.167 Removing: /var/run/dpdk/spdk_pid71966 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72386 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72433 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72480 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72496 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72554 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72570 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72628 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72644 00:25:22.167 
Removing: /var/run/dpdk/spdk_pid72686 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72704 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72746 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72764 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72891 00:25:22.167 Removing: /var/run/dpdk/spdk_pid72933 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73011 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73172 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73245 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73276 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73703 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73796 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73895 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73939 00:25:22.167 Removing: /var/run/dpdk/spdk_pid73959 00:25:22.167 Removing: /var/run/dpdk/spdk_pid74043 00:25:22.167 Removing: /var/run/dpdk/spdk_pid74663 00:25:22.167 Removing: /var/run/dpdk/spdk_pid74693 00:25:22.167 Removing: /var/run/dpdk/spdk_pid75162 00:25:22.167 Removing: /var/run/dpdk/spdk_pid75260 00:25:22.167 Removing: /var/run/dpdk/spdk_pid75364 00:25:22.167 Removing: /var/run/dpdk/spdk_pid75406 00:25:22.167 Removing: /var/run/dpdk/spdk_pid75432 00:25:22.167 Removing: /var/run/dpdk/spdk_pid75457 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77304 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77419 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77423 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77446 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77486 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77490 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77502 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77541 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77545 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77557 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77602 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77606 00:25:22.167 Removing: /var/run/dpdk/spdk_pid77618 00:25:22.167 Removing: /var/run/dpdk/spdk_pid78980 00:25:22.167 Removing: /var/run/dpdk/spdk_pid79066 00:25:22.167 Removing: /var/run/dpdk/spdk_pid80455 00:25:22.167 Removing: /var/run/dpdk/spdk_pid81822 00:25:22.167 Removing: /var/run/dpdk/spdk_pid81882 00:25:22.167 Removing: /var/run/dpdk/spdk_pid81935 00:25:22.167 Removing: /var/run/dpdk/spdk_pid81996 00:25:22.167 Removing: /var/run/dpdk/spdk_pid82073 00:25:22.167 Removing: /var/run/dpdk/spdk_pid82142 00:25:22.167 Removing: /var/run/dpdk/spdk_pid82278 00:25:22.167 Removing: /var/run/dpdk/spdk_pid82620 00:25:22.167 Removing: /var/run/dpdk/spdk_pid82651 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83073 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83250 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83340 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83444 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83486 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83506 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83811 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83849 00:25:22.167 Removing: /var/run/dpdk/spdk_pid83894 00:25:22.167 Removing: /var/run/dpdk/spdk_pid84268 00:25:22.167 Removing: /var/run/dpdk/spdk_pid84412 00:25:22.167 Removing: /var/run/dpdk/spdk_pid85200 00:25:22.167 Removing: /var/run/dpdk/spdk_pid85321 00:25:22.167 Removing: /var/run/dpdk/spdk_pid85487 00:25:22.167 Removing: /var/run/dpdk/spdk_pid85562 00:25:22.167 Removing: /var/run/dpdk/spdk_pid85845 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86065 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86400 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86561 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86639 00:25:22.167 Removing: 
/var/run/dpdk/spdk_pid86675 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86752 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86766 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86804 00:25:22.167 Removing: /var/run/dpdk/spdk_pid86952 00:25:22.167 Removing: /var/run/dpdk/spdk_pid87156 00:25:22.167 Removing: /var/run/dpdk/spdk_pid87406 00:25:22.167 Removing: /var/run/dpdk/spdk_pid87658 00:25:22.167 Removing: /var/run/dpdk/spdk_pid87923 00:25:22.167 Removing: /var/run/dpdk/spdk_pid88254 00:25:22.167 Removing: /var/run/dpdk/spdk_pid88379 00:25:22.167 Removing: /var/run/dpdk/spdk_pid88455 00:25:22.167 Removing: /var/run/dpdk/spdk_pid88805 00:25:22.167 Removing: /var/run/dpdk/spdk_pid88854 00:25:22.425 Removing: /var/run/dpdk/spdk_pid89139 00:25:22.425 Removing: /var/run/dpdk/spdk_pid89399 00:25:22.425 Removing: /var/run/dpdk/spdk_pid89727 00:25:22.425 Removing: /var/run/dpdk/spdk_pid89837 00:25:22.425 Removing: /var/run/dpdk/spdk_pid89864 00:25:22.425 Removing: /var/run/dpdk/spdk_pid89921 00:25:22.425 Removing: /var/run/dpdk/spdk_pid89967 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90021 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90194 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90249 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90300 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90356 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90377 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90466 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90616 00:25:22.426 Removing: /var/run/dpdk/spdk_pid90802 00:25:22.426 Removing: /var/run/dpdk/spdk_pid91048 00:25:22.426 Removing: /var/run/dpdk/spdk_pid91305 00:25:22.426 Removing: /var/run/dpdk/spdk_pid91563 00:25:22.426 Removing: /var/run/dpdk/spdk_pid91833 00:25:22.426 Clean 00:25:22.426 22:07:07 -- common/autotest_common.sh@1451 -- # return 0 00:25:22.426 22:07:07 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:22.426 22:07:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.426 22:07:07 -- common/autotest_common.sh@10 -- # set +x 00:25:22.426 22:07:07 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:22.426 22:07:07 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.426 22:07:07 -- common/autotest_common.sh@10 -- # set +x 00:25:22.426 22:07:07 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:22.426 22:07:07 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:22.426 22:07:07 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:22.426 22:07:07 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:22.426 22:07:07 -- spdk/autotest.sh@394 -- # hostname 00:25:22.426 22:07:07 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:22.684 geninfo: WARNING: invalid characters removed from testname! 
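The coverage post-processing that follows captures a test tracefile, merges it into the baseline, and strips external and third-party sources. The same flow in minimal form, with placeholder paths instead of the job's absolute ones and the long --rc coverage switches omitted:

  # Capture, merge, and filter coverage data as the steps below do.
  lcov -q -c --no-external -d ./spdk -t fedora39 -o cov_test.info   # capture
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info       # merge
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info            # drop DPDK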
00:25:49.237 22:07:30 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:49.237 22:07:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:51.140 22:07:35 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:53.042 22:07:37 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:55.659 22:07:39 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:57.561 22:07:42 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:58.931 22:07:43 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:59.190 22:07:43 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:25:59.190 22:07:43 -- common/autotest_common.sh@1681 -- $ lcov --version 00:25:59.190 22:07:43 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:25:59.190 22:07:43 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:25:59.190 22:07:43 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:25:59.190 22:07:43 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:25:59.190 22:07:43 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:25:59.190 22:07:43 -- scripts/common.sh@336 -- $ IFS=.-: 00:25:59.190 22:07:43 -- scripts/common.sh@336 -- $ read -ra ver1 00:25:59.190 22:07:43 -- scripts/common.sh@337 -- $ IFS=.-: 00:25:59.190 22:07:43 -- scripts/common.sh@337 -- $ read -ra ver2 00:25:59.190 22:07:43 -- scripts/common.sh@338 -- $ local 'op=<' 00:25:59.190 22:07:43 -- scripts/common.sh@340 -- $ ver1_l=2 00:25:59.190 22:07:43 -- scripts/common.sh@341 -- $ ver2_l=1 00:25:59.190 22:07:43 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 
v 00:25:59.190 22:07:43 -- scripts/common.sh@344 -- $ case "$op" in 00:25:59.190 22:07:43 -- scripts/common.sh@345 -- $ : 1 00:25:59.190 22:07:43 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:25:59.190 22:07:43 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.190 22:07:43 -- scripts/common.sh@365 -- $ decimal 1 00:25:59.190 22:07:43 -- scripts/common.sh@353 -- $ local d=1 00:25:59.190 22:07:43 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:25:59.190 22:07:43 -- scripts/common.sh@355 -- $ echo 1 00:25:59.190 22:07:43 -- scripts/common.sh@365 -- $ ver1[v]=1 00:25:59.190 22:07:43 -- scripts/common.sh@366 -- $ decimal 2 00:25:59.190 22:07:43 -- scripts/common.sh@353 -- $ local d=2 00:25:59.190 22:07:43 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:25:59.190 22:07:43 -- scripts/common.sh@355 -- $ echo 2 00:25:59.190 22:07:43 -- scripts/common.sh@366 -- $ ver2[v]=2 00:25:59.190 22:07:43 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:25:59.190 22:07:43 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:25:59.190 22:07:43 -- scripts/common.sh@368 -- $ return 0 00:25:59.190 22:07:43 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.190 22:07:43 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:25:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.190 --rc genhtml_branch_coverage=1 00:25:59.190 --rc genhtml_function_coverage=1 00:25:59.190 --rc genhtml_legend=1 00:25:59.190 --rc geninfo_all_blocks=1 00:25:59.190 --rc geninfo_unexecuted_blocks=1 00:25:59.190 00:25:59.190 ' 00:25:59.190 22:07:43 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:25:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.190 --rc genhtml_branch_coverage=1 00:25:59.190 --rc genhtml_function_coverage=1 00:25:59.190 --rc genhtml_legend=1 00:25:59.190 --rc geninfo_all_blocks=1 00:25:59.190 --rc geninfo_unexecuted_blocks=1 00:25:59.190 00:25:59.190 ' 00:25:59.190 22:07:43 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:25:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.190 --rc genhtml_branch_coverage=1 00:25:59.190 --rc genhtml_function_coverage=1 00:25:59.190 --rc genhtml_legend=1 00:25:59.190 --rc geninfo_all_blocks=1 00:25:59.190 --rc geninfo_unexecuted_blocks=1 00:25:59.190 00:25:59.190 ' 00:25:59.190 22:07:43 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:25:59.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.190 --rc genhtml_branch_coverage=1 00:25:59.190 --rc genhtml_function_coverage=1 00:25:59.190 --rc genhtml_legend=1 00:25:59.190 --rc geninfo_all_blocks=1 00:25:59.190 --rc geninfo_unexecuted_blocks=1 00:25:59.190 00:25:59.190 ' 00:25:59.190 22:07:43 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.190 22:07:43 -- scripts/common.sh@15 -- $ shopt -s extglob 00:25:59.190 22:07:43 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:59.190 22:07:43 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.190 22:07:43 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.190 22:07:43 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.190 22:07:43 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.190 22:07:43 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.190 22:07:43 -- paths/export.sh@5 -- $ export PATH
00:25:59.190 22:07:43 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:59.190 22:07:43 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:25:59.190 22:07:43 -- common/autobuild_common.sh@479 -- $ date +%s
00:25:59.190 22:07:43 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727734063.XXXXXX
00:25:59.190 22:07:43 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727734063.QGq471
00:25:59.190 22:07:43 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:25:59.190 22:07:43 -- common/autobuild_common.sh@485 -- $ '[' -n main ']'
00:25:59.190 22:07:43 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:25:59.190 22:07:43 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:25:59.190 22:07:43 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:25:59.190 22:07:43 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:25:59.190 22:07:43 -- common/autobuild_common.sh@495 -- $ get_config_params
00:25:59.190 22:07:43 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:25:59.190 22:07:43 -- common/autotest_common.sh@10 -- $ set +x
00:25:59.190 22:07:43 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:25:59.190 22:07:43 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:25:59.190 22:07:43 -- pm/common@17 -- $ local monitor
00:25:59.190 22:07:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:59.190 22:07:43 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:59.190 22:07:43 -- pm/common@25 -- $ sleep 1
00:25:59.190 22:07:43 -- pm/common@21 -- $ date +%s
00:25:59.190 22:07:43 -- pm/common@21 -- $ date +%s
00:25:59.190 22:07:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727734063
00:25:59.191 22:07:43 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727734063
00:25:59.191 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727734063_collect-cpu-load.pm.log
00:25:59.191 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727734063_collect-vmstat.pm.log
00:26:00.151 22:07:44 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:26:00.151 22:07:44 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:26:00.151 22:07:44 -- spdk/autopackage.sh@14 -- $ timing_finish
00:26:00.151 22:07:44 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:26:00.151 22:07:44 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:26:00.151 22:07:44 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:26:00.151 22:07:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:26:00.151 22:07:44 -- pm/common@29 -- $ signal_monitor_resources TERM
00:26:00.151 22:07:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:26:00.151 22:07:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:00.151 22:07:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:26:00.151 22:07:44 -- pm/common@44 -- $ pid=93532
00:26:00.151 22:07:44 -- pm/common@50 -- $ kill -TERM 93532
00:26:00.151 22:07:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:26:00.151 22:07:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:26:00.151 22:07:44 -- pm/common@44 -- $ pid=93533
00:26:00.151 22:07:44 -- pm/common@50 -- $ kill -TERM 93533
00:26:00.151 + [[ -n 5753 ]]
00:26:00.151 + sudo kill 5753
00:26:00.158 [Pipeline] }
00:26:00.173 [Pipeline] // timeout
00:26:00.178 [Pipeline] }
00:26:00.193 [Pipeline] // stage
00:26:00.198 [Pipeline] }
00:26:00.213 [Pipeline] // catchError
00:26:00.221 [Pipeline] stage
00:26:00.223 [Pipeline] { (Stop VM)
00:26:00.236 [Pipeline] sh
00:26:00.515 + vagrant halt
00:26:03.049 ==> default: Halting domain...
00:26:08.366 [Pipeline] sh
00:26:08.643 + vagrant destroy -f
00:26:11.180 ==> default: Removing domain...
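[Editor's note] In the paths/export.sh trace above, each step prepends the tool directories again, so PATH accumulates duplicates (/opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin each appear twice by export.sh@4). Duplicates are harmless for lookup but make traces noisy; a first-seen-order dedup pass is the usual cleanup. A minimal sketch, not part of SPDK's paths/export.sh (the function name is illustrative):

#!/usr/bin/env bash
# Sketch: collapse duplicate PATH entries while keeping first-seen order.
dedup_path() {
    local IFS=: entry out='' seen=:
    for entry in $PATH; do
        [[ $seen == *":$entry:"* ]] && continue   # skip entries already kept
        out=${out:+$out:}$entry                   # append, adding ':' after the first
        seen+=$entry:
    done
    printf '%s\n' "$out"
}

PATH=$(dedup_path)
export PATH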
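[Editor's note] The start_monitor_resources / stop_monitor_resources pair traced above follows the standard PID-file pattern: each collector records its PID as <name>.pid under output/power, and teardown walks MONITOR_RESOURCES and sends TERM to whatever is still running (pm/common@29 through @50, PIDs 93532 and 93533 in this run). A minimal sketch of the stop side, with the directory and array contents taken from the log but the function body assumed rather than copied from scripts/perf/pm:

#!/usr/bin/env bash
# Sketch of the PID-file teardown traced above; illustrative, not SPDK's pm/common.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

signal_monitor_resources() {
    local signal=$1 monitor pid pid_file
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        pid_file=$power_dir/$monitor.pid
        [[ -e $pid_file ]] || continue                     # this monitor never started
        pid=$(<"$pid_file")
        kill -0 "$pid" 2>/dev/null && kill "-$signal" "$pid"  # signal only live PIDs
        rm -f "$pid_file"
    done
}

# Registered from an EXIT trap, as in the trace above:
trap 'signal_monitor_resources TERM' EXIT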
00:26:11.756 [Pipeline] sh
00:26:12.031 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:26:12.039 [Pipeline] }
00:26:12.055 [Pipeline] // stage
00:26:12.059 [Pipeline] }
00:26:12.073 [Pipeline] // dir
00:26:12.078 [Pipeline] }
00:26:12.091 [Pipeline] // wrap
00:26:12.096 [Pipeline] }
00:26:12.108 [Pipeline] // catchError
00:26:12.117 [Pipeline] stage
00:26:12.119 [Pipeline] { (Epilogue)
00:26:12.131 [Pipeline] sh
00:26:12.408 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:17.687 [Pipeline] catchError
00:26:17.689 [Pipeline] {
00:26:17.702 [Pipeline] sh
00:26:17.980 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:17.981 Artifacts sizes are good
00:26:17.988 [Pipeline] }
00:26:18.002 [Pipeline] // catchError
00:26:18.013 [Pipeline] archiveArtifacts
00:26:18.019 Archiving artifacts
00:26:18.141 [Pipeline] cleanWs
00:26:18.192 [WS-CLEANUP] Deleting project workspace...
00:26:18.192 [WS-CLEANUP] Deferred wipeout is used...
00:26:18.213 [WS-CLEANUP] done
00:26:18.215 [Pipeline] }
00:26:18.230 [Pipeline] // stage
00:26:18.236 [Pipeline] }
00:26:18.250 [Pipeline] // node
00:26:18.255 [Pipeline] End of Pipeline
00:26:18.301 Finished: SUCCESS
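[Editor's note] check_artifacts_size.sh above only reports "Artifacts sizes are good"; the gate itself lives in the jbp repository and is not shown in this log. A guess at the shape of such a check, with the output directory and the 1 GiB cap being assumptions for illustration, not the jbp script's actual values:

#!/usr/bin/env bash
# Sketch of an artifact-size gate like the check_artifacts_size.sh step above.
artifact_dir=${artifact_dir:-output}
limit_kb=$((1024 * 1024))                 # assumed cap: 1 GiB, expressed in KiB

used_kb=$(du -sk "$artifact_dir" | awk '{print $1}')
if ((used_kb > limit_kb)); then
    echo "Artifacts too large: ${used_kb} KiB > ${limit_kb} KiB"
    exit 1                                # non-zero status fails the catchError block
fi
echo "Artifacts sizes are good"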